Saturday, May 23, 2020

Five Years of Mob Testing, Hello to Ensemble Testing

With my love of reflection cycles and writing about them, I come back to a topic I have cared a great deal about in the last five years: Mob Testing.

Mob Testing is the idea that instead of doing our testing solo or paired, we can bring together a group of people for any testing activity, using a specific mechanism that keeps everyone engaged. That mechanism, strong-style navigation, insists that the work is not driven by the person at the keyboard but by someone off the keyboard, who uses their words to direct it, enabling everyone to be part of the activity.

From Mob Programming to Mob Testing

In 2014, I was organizing the Tampere Goes Agile conference and invited a keynote speaker from the USA with a crazy idea his team called Mob Programming: the whole team programming together. I remember sitting in the room listening to Woody Zuill speak and thinking the idea was just too insane, that it would never work. That reaction triggered my usual response: I had to try this, as it clearly was not something I could reason my way through.

By August 2015, I had tried mob programming with my team where I was the only tester in the whole organization, and was telling myself I did it to experience it, that I did not particularly enjoy it, and that it was all for the others. True to my style, I gave an interview to Valerie Silverthorne, introduced through Lisa Crispin and said: "I'm not certain if I enjoy this style of working in the long term."

September 2015 saw me moving my experiments with the approach away from my workplace into the community. That month, I ran a session on Mob Testing at the CITCON open space conference in Helsinki, Finland. A week later, I ran another session on Mob Testing at the Testival open space conference in Split, Croatia. A week after that, another in Jyväskylä, Finland. By October 22nd, I had established what I called Mob Testing, as I was using it on my commercial course as part of TinyTestBash in Brighton, UK.

I was hooked on Mob Testing, not necessarily as a way of doing testing, but as a way of seeing how other people do testing, for learning and teaching. With something that carries as much implicit knowledge and assumption as testing, doing the work together gave me an avenue to learn how others thought while they were testing, what tools they were using and what mechanisms they were relying on. As a teacher, it allowed me to see if a model I taught was something the group could apply. But more than teaching, it created groups that learned together, and I learned with them.

I found Mob Testing at a time when I felt alone as a tester in a group of programmers. Later, as I changed jobs and was no longer the only one of my kind, Mob Testing was my way of connecting with the community beyond the chitchat of conceptual talk and definition wars. While I ran some trainings specifically on Mob Testing, I was mostly using it to teach other things in testing: exploratory testing (including an inkling towards documenting as automation), and specific courses on automating tests.

Mob Testing was something I was so excited about that I would travel to talk about it in Philadelphia, USA as well as Sydney, Australia, and a lot of different places between those. In November 2017 I took my Mob Testing course to Potsdam, Germany for Agile Testing Days. I remember this group as a particularly special one, as it had Lisi Hocke as a participant, and building on what she learned there, she has taken Mob Testing further than I could have imagined. We both have our day jobs in our organizations, and training, speaking and sharing is a hobby more than work.

A year ago, I learned that Joep Schuurkes and Elizabeth Zagroba were running Mob Testing sessions at their workplace, and was delighted to listen to them speak of their lessons on how it turned out to be much more about learning than contributing.

We've seen the communities of Mob Programming and Mob Testing grow, and I love noticing how many different organizations apply this. When I meet a group to talk about anything testing, it is more the rule than the exception that they mention their trying out this crazy thing links back to me sharing my experiences. Community is powerful.

Personally, I like to think of Mob Testing as a mechanism to give me two things:
  1. Learning about testing
  2. Gateway to mob programming 
I work to break up separate teams of testers and to grow an appreciation of true collaboration, where developers and testers work so closely that it gets easy to rename everyone developers.

Over the years, I wrote a few good pieces on this to get people started:


With a heavy heart, I have listened to the parts of the community so often silenced say that mob programming and testing as terms are anxiety inducing, and I agree. They are great terms for specifically finding this particular style of programming or testing, but they need replacing. I weighed two options: group programming/testing and ensemble programming/testing. For recognizability, I go for the latter. I can't take out all the material I have already created with the old label, but I will work to have new materials carry the new label. Because I care for the people who care about stuff like this.




Feature and Release Testing

Back in the day, we used to talk about system testing. System testing was the work done by testers, with an integrated system where hardware and software were both closer to whatever we would imagine having in production. It usually came with the idea that it was a phase after unit and integration testing. In many projects, integration testing was the same testing as system testing but found a lot of bugs, system testing found a few, and acceptance testing ended up being the same tests again, now run by the customer organization, finding more bugs than system testing could.

I should not say "back in the day", as in the testing field's certification courses these terms are still being taught as if they were the core smartassery testers need. I'm just feeling very past the terms, and find them unhelpful and adding to the confusion.

The idea that we can test our software as isolated units and in various degrees of integrated units, moving towards a realistic production environment, is still valid. And we won't see some of the problems unless things are integrated. We're not integrating only our individual pieces, but 3rd party software and whatever hardware the system runs on. And even if we seek problems in our own software, the environment around it matters for what the right working software for us to build is.

With the introduction of agile, continuous integration and continuous delivery, the testing field very much clung to the words we had grown up with, resulting in articles, like the ones I wrote back when agile was new to me, showing that we do smaller slices of the same but still the same.

I'm calling that unhelpful now.

While unit-integration-system-acceptance is something I grew up with as a tester, it isn't that helpful when you get a lot of builds, one from each merge to master, and are making your way through this new kind of jungle, where the world around you won't stop just so that you'd get through testing the feature you are on, on a build that won't even be the one production sees.

We repurposed unit-integration-system-acceptance to test automation, and I wish we didn't. Giving less loaded names to things that are fast to run or take a little longer to run would have helped us more.

Instead of system testing, I found myself talking about feature/change testing (anything you could test for a change, or a group of changes comprising a feature, that would reach the customer's hands when we were ready) and release testing (anything we still needed to test when making a release, hopefully just a run of test automation but also a check of what is about to go out).

For a few years, I was routinely making a checklist for release testing:

  • Minimize the tests needed now; get to running only automation, observing the automation run as the form of visual, "manual" testing
  • Split into features in general and features being introduced, shining a special light on features being introduced by writing user-oriented documentation on what we were about to introduce to them

From tens of scenarios the team felt needed to be manually / visually confirmed, we got to running a matrix test automation run on the final version, visually watching some of it and confirming the results matched expectations. One automated test more at a time. One taken risk at a time, with feedback on its foundation.

Eventually, release testing turned into the stage where the feature/change testing that was still leaking, not yet completed, was finished. It was the moment of stopping just barely enough to see that the new things we were making promises on were there.

I'm going through these moves again. Separating the two, establishing what belongs in each box and how that maps into the work of "system testers". That's what a new job gives me - appreciation of tricks I've learned so well I took them for granted. 





Thursday, May 21, 2020

Going beyond the Defaults

With a new job comes a new mobile phone. The brand new iPhone X is an upgrade to my previous iPhone 7, except for color - the pretty rose gold I had come to love is no longer with me. The change experience is fluent: a few clicks and credentials, and all I need is time for stuff to sync to the new phone.

As I start using the phone, I can't help noticing the differences. Wanting to kill an application that is stuck, I struggle when there is no button to double-click to get to all running applications. I call my 11-year-old daughter to the rescue, and she teaches me the right kind of swipes.

Two weeks later, it feels as if there was never a change.

My patterns of phone use did not change as the model changed. If there's more power (features) to the new version, I most likely am not taking advantage of them, as I work on my defaults. Quality-wise, I am happy as long as my defaults work. Only the features I use can make an impact on my perception of quality.

When we approach a system with the intent of testing it, our own defaults are not sufficient. We need a superset of everyone's defaults. We call out to requirements to get someone else's model of all the things we are supposed to discover, but that is only a starting point.

For each claim made in requirements, different users can approach it with different expectations, situations, and use scenarios. Some make mistakes, some do bad things intentionally (especially for purposes of compromising security). There's many ways it can be used right ("positive testing") and many ways it can be used wrong ("negative testing" - hopefully leading to positive testing of error flows).

Exploratory testing says we approach this reality knowing we, the testers, have defaults. We actively break out of our defaults to see things in scale of all users. We use requirements as the skeleton map, and outline more details through learning as we test. We recognize some of our learnings would greatly benefit us in repeating things, and leave behind our insights documented as test automation. We know we weren't hired to do all testing, but to get all testing done and we actively seek everyone's contributions.

We go beyond the defaults knowing there is always more out there. 

Monday, May 18, 2020

The Foundation Moves - Even in Robot Framework with Selenium

As my work as tester shifts with a new organization, so do things I am watching over. One of the things the shift made me watch over more carefully is the Robot Framework community. While I watch that with eyes of a new person joining, I also watch it with eyes of someone with enough experience to see patterns and draw parallels.

The first challenge I pick on is quite a good one to have around: old versions out there and people not moving forward from those. It's a good one because it shows the tool and its user community has history. It did not emerge out of the blue yesterday. But history comes with its set of challenges.

Given a tool with old versions out there that the main maintainers do their best to deprecate, the maintainers have little power over the people in the ecosystem:

  • Course providers create their materials with a version, and for free and open courses, may not update them. Google remembers the versions with outdated information.
  • People in companies working with automating tests may have learned the tricks of their trade with a particular version, and they might not have realized that we need to keep following the versions and the changes, and that maintaining tests isn't just keeping them running in our organization but proactively moving them to the latest versions.
Robot Framework has a great, popular example of this: the Robot Framework Selenium Library. Having had the privilege of working side by side in my previous place with the maintainer Tatu Aalto, I have utmost respect for the work he does. 

The Robot Framework Selenium Library is built on top of Selenium. Selenium has moved a lot in recent years, first to Selenium WebDriver over Selenium RC, and then through versions 3 and 4 of Selenium WebDriver, moving Selenium towards a browser-controlling standard integrated quite impressively with the browsers. As something built on top, the library can abstract a lot of this change for its users, for both good and bad.

The good is, the users can rely on someone else's work (your contributions welcome, I hear!)  to keep looking at how Selenium is moving. As a user, you can get the benefits of changes by updating to the newer version. 

The bad is, the abstraction lets you avoid moving forward through many changes, and it is common to find versions you would expect to be deprecated in code created recently.

Having the routine of following your dependencies is an important one. Patterns of recognizing that you are not following your dependencies are sometimes easy to spot. 
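One lightweight way to build that routine is a small script that flags installed dependencies falling below a version floor you choose, run as part of CI or locally. This is only a sketch using the Python standard library; the package names and minimum versions below are assumptions to illustrate the idea, not recommendations from the Robot Framework project:

```python
from importlib.metadata import PackageNotFoundError, version

# Illustrative floors only - set your own, and revisit them regularly.
MINIMUMS = {
    "robotframework": (4, 0),
    "robotframework-seleniumlibrary": (5, 0),
}

def stale_dependencies(minimums=MINIMUMS):
    """Return (package, installed_version) pairs that fall below their floor."""
    stale = []
    for package, floor in minimums.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            continue  # not installed in this environment; nothing to check
        # Compare only the numeric leading parts of the version string.
        parts = tuple(
            int(part) for part in installed.split(".")[:2] if part.isdigit()
        )
        if parts < floor:
            stale.append((package, installed))
    return stale

if __name__ == "__main__":
    for package, installed in stale_dependencies():
        print(f"{package} {installed} is below the agreed minimum - update it")
```

A loud message in every pipeline run is a pattern that is easy to spot, which is exactly the point.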

With Robot Framework, let me suggest a rule: 
If your Robot Framework code introduces a library called Selenium2Library, it is time for you to think about removing the 2 in the middle and learning how you can create yourself a cadence of updating to latest regularly. 
Sometimes it is just as easy as allowing for new releases (small steps help with that!). Sometimes it requires you join information channels and follow up on what goes on. 
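As a concrete sketch of that rule, the change is a one-line rename in the settings of your suite. The test case below is a made-up illustration, not from any real project:

```robotframework
*** Settings ***
# Before: Library    Selenium2Library
Library    SeleniumLibrary

*** Test Cases ***
Front Page Opens
    Open Browser    https://example.com    headlesschrome
    Title Should Be    Example Domain
    [Teardown]    Close All Browsers
```

In recent versions, Selenium2Library itself is a thin wrapper delegating to SeleniumLibrary, so in many suites the rename really is all it takes.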

In the last years, we have said goodbye to Python 2 (I know, not all of us - get to that now, it is already gone) and Selenium has taken leaps forward. Understanding your ecosystem and being connected with the community isn't a good thing to abstract away.

In case you did not know, Robot Framework has a nice Slack group - say hi to me in case you're there!