Thursday, July 31, 2014

Pairing to slow down a developer from starting new items

My last two posts have been about small batches and the crazy reasons that make small batches take a long calendar time. As usual, I wasn't happy with just saying I didn't like how things were; I needed to try something different.

First, I sent an email to the developer and our manager, saying I had two ideas for slowing the developer down after "done without testing", as I felt that taking another task when this one was not yet really done was premature and a source of problems. I suggested we could:
  1. Allow the developer to spend time studying instead of moving on to another interesting feature he would not want to come back from. In general, the option of just moving to another task should be less tempting, and getting quickly back to the thing that isn't yet done should remain the priority.
  2. Make the developer responsible for the testing task that follows the development task, with the understanding that he will need to invite me to pair up with him to do it well enough for now.
The choice between the two wasn't real. Number 2 was the only real option. Number 1 was there to emphasize that we actually were supposed to have a rule on limiting work in progress.

So today, the developer walked up to me for the first time, asking me to test with him. So far I had been the one polling for a proper time, and doing that way too rarely. We sat down with two computers and two browsers, and started testing.

With all the issues fixed from the last round, I asked if he was confident in the feature now. He said he wasn't - because there was always something I would see that he just didn't. We talked about the starting choices of what to do first and why, and he seemed a bit puzzled by my way of thinking - the problem area seemed more condensed to him, with fewer links around the product.

We found problems in the changes he had just made, and we tested through how it should work in different scenarios. He would take a pause from testing, fix things in code, and deploy a new version that made things better.

When he seemed to think we were done, I wanted to try a few more things, just because I thought they could be connected and I had not tried them yet. We found problems specific to IE9, and problems with how data (unneeded null values) got saved in editing scenarios. We learned that there was a connection to a history feature that was forgotten during development and did not work. And that there's a multiple selection feature this feature wasn't compatible with.

After a few hours of testing, we looked at the problems we now knew about and agreed it would take a bit of time to come up with how to deal with them, and that alone time might be useful. Before we parted, we still tested the other feature he had been working on, the one that left me waiting on Monday, and found no issues. And we agreed we'd spend an hour together on each feature he completes from now on, because it's just more efficient and fun.

I need to keep reminding the developers that they can't outsource quality to me. Even if they know I will find problems they miss, that just isn't an acceptable approach. Collaboration is good.

Tuesday, July 29, 2014

Need for Speed, Motivation is the Key

Following my previous post on waiting and having business value items sit in the inventory of unfinished work for a long time due to juggling other work, a colleague on Twitter left me thinking about batch sizes.

He pointed out that with a small batch size, the wait time would be shorter. It's a great observation, but also in my experience, one of the most misunderstood principles in practice.

What is a small batch? I would say it is the smallest possible value you can add, for now, into the software. A batch is not a task. It's not that you've done "design" for something; you need to go through all the layers to actually have the resulting value in production. Small batches are the heart of incremental development and delivery, and from a testing perspective, I would claim I've been getting better at helping my teams find the smallest possible things we could add.

The challenges with small batches tend to come with the tasks some people refuse to acknowledge. Like testing. If there's a really small change, adding six lines of code, it shouldn't take long to get that tested, right? Unfortunately, quite often the way the system delivers value through connections to all kinds of things means that the very small change isn't a small one to test. And in particular, it isn't a small one to fix, as one (or more) of those connections was completely dismissed in the implementation. There's a very relevant, common use case that just wasn't in the focus of the change.

So, what is a small batch to some may be bigger to others. And these surprises in how batches are sized up cause the waiting problems, as one has already "moved on" when more work on the same small thing arrives. The developer never committed to completing the value, just his idea of what tasks would be needed to deliver the value. And he is mentally engaged in other tasks already when he is told, with examples, that it just isn't done yet. The message is not welcome, leading to a wait time.

This all is a vicious cycle. It eats away at the morale of us all. The tester, the developer, the manager. None of us is happy. So why we don't get it changed is a mystery to me.

The culture includes a "need for speed". Doing things continuously, incrementally, fast. Just add more. Change it. And don't break it while changing. There's no test automation to support this. The building block of speed is the developer, who is pushed to find a minimal solution that could be delivered quickly. The ideas of someone testing are new and mostly make things worse for him: the time was already up by the time of delivery, and now the poor developer has fixes to create while feeling the pressure of the next quick feature in the queue. And often it's not just external pressure but also internal - the next feature is more fun than fixing the old one.

Let me look at a small batch from one more angle. A really small batch without big testing implications that looks like an hour of work - it takes three days. In other words, a work week fits just two of those types of issues. When a developer is pressured with time, they learn to protect their other interests. Motivation is low when you're all alone with your assigned task, and you might not have the skills to come up with a solution. The first day goes into just procrastinating with the idea of having to do this. Google a little, read whatever you find, talk to your colleagues - not directly on the matter, of course, otherwise they might say you're not good enough. The second day you try out some of the stuff, and none of it works. On the third day you directly ask someone who knows and can complete the task in an hour. And after the other developer has done the actual bit, you spend time testing, checking and finalizing, as your name will still go on the fix.

I believe that while the idea of small batches is great, the relevant bit here is to do something about developer happiness, motivation and skills. There's a reason any of us would rather spend time procrastinating than working efficiently towards delivering the value. The pairing and other changes I suggest threaten the hidden self-development time. They threaten the permission to work remotely and not talk to your team mates for days. But they also hold a lot of potential to make us happier, all of us in the team.

How many "small batches" does your work week fit? Doesn't the whole concept make you feel bad about not completing more of them in your week? I love the idea of trying to split things rationally from a value perspective, but I hate the work-monitoring implications.

Not all batches, and the skills needed in them, are equal. Not all days for a professional are equal. I'm great at procrastinating, letting my mind wander and building connections that wouldn't be immediately relevant for the task at hand. But the stuff I'm learning and realizing while at it makes a big difference in the end result. And I also finish the work. Sometimes when I was supposed to. Sometimes early, sometimes later. Motivation and feeling a sense of purpose make a big difference in whether a small batch is small in actual calendar time.

Monday, July 28, 2014

Experience on how to turn testing into not fun

I'm back from vacation, with new energy and the old love for my profession in testing. Testing is great, it's fun, it's challenging. Except when it isn't.

I started off by (re)testing a small feature that, as I had noted just before my vacation, didn't work in so many ways. Instead of us getting the feature finished and sharing a feeling of mutual success on making a delivery, I was told it would be fixed while I was gone. Well, some of it was fixed, but in general it still doesn't work. The users could not do valuable work with it. It doesn't work in real scenarios.

I talk with the developer, only to hear that the stuff I had reported before consists of "special cases" - relevant to the user, but all requiring logic he just hasn't yet implemented because he thought it would be special. I mention the new problems and they are deemed "weird behaviors" - on that we agree. I'm concerned about the severity of the issues for the end user, blocking the delivery of added value. The developer seems to be concerned with the technical challenge of it, how the code could do things like that. I just know it does, from experience.

I ask if the developer tested the feature himself, if he actually uses it, and he tells me he does. The blatant difference in our experiences calls for pairing to test it, to share the experience. With my new energy, I'm about to suggest that.

But just before I suggest pairing (which, sadly, isn't a normal way for us to work), the developer points out that he has started working on another task that will take him days to weeks, and that he will not look into the "weird behaviors" before he's done with the other work. I remember again what really frustrates me - unfinished features waiting in inventory. Features that don't improve while they wait, features that get more broken when they are "fixed" after weeks of wait time. And the repeated experience of testing something that doesn't work for me, but presumably always works for the developer.

Waiting sucks. Finding easy bugs sucks. Not collaborating sucks. This isn't testing; this is starting testing again and again, being blocked right at the beginning. Testing would be great. I hope I get to do more of that soon.