Monday, August 31, 2015

Selenium on my mind

Selenium WebDriver has been the theme of my week - a theme that would appear to continue because I choose to continue it. I’m struggling with my feelings about this, as I find working on test automation the most boring part of the work I could volunteer for. But I volunteer to investigate more deeply how I think while testing this way (working with tests as the artifact focus) and how I think while testing the usual way (improvising on the application, with tests as performance). I have a working theory that I’m seeking evidence against: that I find more complicated yet relevant bugs when exploring without the form of artifacts keeping me on a leash. As a sort of corollary, I’m investigating a claim that I don’t enjoy test automation because I’m not fluent in it, by becoming more fluent before dumping the activity - unless I change my mind on what I want to spend my life on.

In the last week, I’ve done pretty much all-Selenium activities: mobbing on Selenium tests with my whole team, and pairing with a summer intern on creating more Selenium tests. The group/pair activities have been helpful, as they fulfill my need to be social at work and also keep me honest and progressing on a task that really isn’t one of my favorites.

I seem to find some of my motivation in playing with the dynamics. While last week we focused our efforts on automating scenarios for existing features that should be monitored, today we experimented with focusing on a completely new feature I’d never tested manually before.

I was navigating (we had fixed the pairing style since the last session, and I was much happier with the work), and while I had many scenarios on my mind, I chose first the one I’d like to see included in the tests. It was almost the simplest one, but with a twist: a theory of error, a hunch of the kind I find so common, about what would likely be wrong.

The feature was tiny. We have a dialog that lists packages, where we had previously hidden all packages considered “ready” for a particular user role. We now needed to extend it to show the ready packages. To test this, we needed to play with two different user roles to create different states for the packages that then should or should not be visible.

My first hunch was that if something was wrong, one likely thing would be that there are two types of “ready” packages - ones that are made visible to the other role and ones that are not. That’s what I chose as our first scenario. We clicked through it manually and noted that there was a bug. We laughed at my ability to guess priorities and decided to try out something a developer from outside our organization once suggested to me: reporting a bug with a failing test. The little framing difference created just enough variance for me to get out of the general Selenium boredom that I fight, and we progressed very nicely on automating. There were all the usual obstacles of finding the right things to wait on so that the test would run reliably, but those felt more like things we just keep seeing along the route.
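The “right things to wait on” are condition-based waits rather than fixed sleeps. As a rough illustration - not our actual test code, and the usage names are invented - here is the pattern in plain Python; it is the same idea that Selenium’s WebDriverWait implements:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    Instead of a fixed sleep, the test keeps re-checking an application-level
    condition (element visible, text present) and proceeds as soon as the
    application is ready - which is what makes the run reliable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Hypothetical usage against a page object:
# wait_until(lambda: dialog.lists_package("Ready package A"), timeout=15)
```

Selenium’s own `WebDriverWait(driver, timeout).until(...)` does this polling for you; the point is to wait on what the application shows, not on the clock.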

When the test was done and we ran it, it passed. We were expecting a failure on the last thing we were checking. The bug was not visible, although both of us knew it was there. So we started looking into the steps of our test where we had reused existing pieces, isolating the exact conditions that would trigger this bug. We learned that the bug would show only if there was exactly one user of the other role to handle the package, and we extended our page elements and test data to include that difference in the scenario.
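To make the isolation concrete, here is a toy model of the kind of rule we were chasing, where visibility wrongly depends on the handler count. This is entirely invented for illustration - the real logic lives in the application under test - but it shows why varying the test data over user counts was what finally exposed the condition:

```python
def ready_packages_shown(packages, other_role_handlers):
    """Toy model of a dialog listing rule with the shape of the bug we found:
    a 'ready' package is wrongly hidden when there is exactly one user of the
    other role to handle it. All names are hypothetical.
    """
    shown = []
    for pkg in packages:
        # The bug: visibility depends on the handler count, which it shouldn't.
        if pkg["ready"] and len(other_role_handlers) == 1:
            continue
        shown.append(pkg)
    return shown
```

Running the same scenario with 0, 1, and 2 handlers pins down a bug like this, whereas a single reused data set can happen to sit on the passing side of the condition.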

Looking back, we used about five times the effort on isolating the bug and reporting it this way compared to the usual “Jira issue” way. But we did not do this to save time; we did it to try out how it would feel.

As we finished the test, the developer whose implementation we were testing walked by and got excited about the failing test. He suggested pairing to fix the problem. I was just leaving, with other things scheduled, but our summer intern - my pair of the day - volunteered to pair on the fix. He seemed really happy to get to work on the application instead of just the tests. Sad that it had to happen for the first time on his last day at work.


  1. First of all (as I think this is my first comment here) I want to thank you - I've been reading this blog for a while now, and I found almost every post interesting.

    I think it's the first time I've encountered someone referring to writing automation as something intended to find new bugs, so I'll have to digest this idea a bit. My first hunch, though, is that when writing my automation, I stumble across stuff I wouldn't look at otherwise, so I find different kinds of bugs.
    A couple of examples might be in order -
    I was going over the tedious task of mapping a DB table to a Java object. As I was going through the steps of adding the 20 or so new fields, I stumbled across some mistakes in the column names, types, or sizes - probably not something I would have noticed without going through this task.
    A second example is from when we integrated a new component into our system. This one had a nasty GUI, so in order to save test time and avoid stability issues we decided to communicate directly with the back-end of this component, only we were too lazy to implement a proper login flow, so we faked a static "security context" - and apparently we could perform actions in the system claiming to be a non-existing user, or even a proper user, without actually logging in.

    Anyway, I think I have a suggestion that might help with the boredom that strikes you when you face writing Selenium tests - try to think of it as an opportunity to look at places in your application you don't usually focus on.
    A while ago, we decided that we wanted to maintain a product-requirements tree. This usually does not fit well into an Agile environment, but we were getting lost in the details and this was our way of trying to tackle that. In theory, all of this work is mostly duplication of stuff written elsewhere or spoken of and done quickly - so "boring" is a good description of it. After working on it for a while, I found that by restructuring requirements into this format I was able to find missing or conflicting requirements, as well as think of new interesting test cases. Since then, I look at that phase as a chance to think quietly about the new feature - spot the gaps we have and come up with interesting questions I want answered (be those questions "how does our application behave when...?" or "should we also do...?"). While writing automation isn't exactly the same, I think that looking at it as an investigation of less-frequented areas can make it more interesting for you.

    1. Thanks for sharing your experiences! I particularly liked your examples, they gave a lot to think about.

      I've been working with this application since the first lines of code were written. For me, there aren't areas that I haven't looked at before, except for the new stuff coming in right now. So to make it more interesting, I find that intertwining automation creation with completely new features works better.

      I don't find any testing I do boring, and I also find that sometimes adding 20 of something puts my brain in a sleep-like mode that leaves it to wander during the mundane task - and I find some of my most creative ideas that way: feeding my brain some task that does not require brainpower.

      I'm still working with an open end on this automation stuff, and investigating the happiness resulting from it on a very personal level that is not necessarily the same for others. It leaves me thinking about leaving the industry, with the trend of everyone coding, as I only find it fun to code stuff that I'm thinking about myself. Perhaps I will transform into a "coder for hire" (automation is programming); right now I still feel there's no amount of money high enough to make me want to feel this way at work, especially since the world is full of other options. But things can change.