Thursday, May 21, 2020

Going Beyond the Defaults

With a new job comes a new mobile phone. The brand new iPhone X is an upgrade from my previous iPhone 7, except for the color - the pretty rose gold I had come to love is no longer with me. The switch is smooth: a few clicks and credentials, and all I need is time for everything to sync to the new phone.

As I start using the phone, I can't help noticing the differences. Wanting to kill an application that is stuck, I struggle when there is no button to double-click to get to all running applications. I call my 11-year-old daughter to the rescue, and she teaches me the right kind of swipes.

Two weeks later, it feels as if there was never a change.

My patterns of phone use did not change as the model changed. If there is more power (features) in the new version, I am most likely not taking advantage of it, as I work within my defaults. Quality-wise, I am happy as long as my defaults work. Only the features I use can make an impact on my perception of quality.

When we approach a system with the intent of testing it, our own defaults are not sufficient. We need a superset of everyone's defaults. We turn to requirements to get someone else's model of all the things we are supposed to discover, but that is only a starting point.

For each claim made in the requirements, different users approach it with different expectations, situations, and usage scenarios. Some make mistakes; some do bad things intentionally (especially to compromise security). There are many ways it can be used right ("positive testing") and many ways it can be used wrong ("negative testing" - hopefully leading to positive testing of the error flows).
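
To make the two concrete, here is a minimal sketch in Python with pytest, assuming a hypothetical create_user function and a made-up requirement that usernames be 3-20 alphanumeric characters; the module, names, and rules are illustrative, not from any particular system.

    import pytest

    from myapp.users import create_user, ValidationError  # hypothetical module


    def test_valid_username_is_accepted():
        # Positive testing: using the feature the way the requirement intends.
        user = create_user("rosegold7")
        assert user.name == "rosegold7"


    @pytest.mark.parametrize("bad_name", ["", "ab", "x" * 21, "rm -rf /", "<script>"])
    def test_invalid_username_is_rejected(bad_name):
        # Negative testing: mistakes and intentional misuse should land in the
        # error flow - which we then test positively for a clear message.
        with pytest.raises(ValidationError) as excinfo:
            create_user(bad_name)
        assert "username" in str(excinfo.value).lower()

The same claim yields both kinds of tests; the negative cases are where the differing expectations, mistakes, and attacks show up.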

Exploratory testing says we approach this reality knowing that we, the testers, have defaults. We actively break out of our defaults to see things at the scale of all users. We use the requirements as a skeleton map and outline more details through learning as we test. We recognize that some of our learnings would greatly benefit us in repeating things, and we leave our insights behind, documented as test automation. We know we weren't hired to do all testing, but to get all testing done, and we actively seek everyone's contributions.
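
As a sketch of what leaving an insight behind might look like, the check below pins down a made-up discovery from an exploratory session; SyncQueue and its methods are hypothetical, standing in for whatever the real system offers.

    from myapp.sync import SyncQueue  # hypothetical module


    def test_items_queued_offline_are_delivered_on_reconnect():
        # Insight from an exploratory session, left behind as automation:
        # items added while offline used to be silently dropped on reconnect.
        queue = SyncQueue(online=False)
        queue.add("photo1")
        queue.add("note2")
        queue.reconnect()
        assert queue.delivered() == ["photo1", "note2"]

The test name and its comment carry the learning, so the next tester inherits the insight instead of rediscovering it.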

We go beyond the defaults knowing there is always more out there.