Tuesday, February 26, 2013

What do you mean, I should support this?

I'm enjoying my tester privilege of appreciating nice bugs. There was one particularly worth appreciating with our product last week, when we learned that there are proxies that do encoding transformations on our URLs, thus breaking our product, which doesn't handle the encoding (around the &-character in particular).

The customer wouldn't know why they get a big visible error. They just tell us it doesn't work. Or to be a little more precise, they tend not to tell us; we get to see their experiences in our logs when they are of that type.

After realizing what was happening, it was relatively easy to work around.
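To make the failure mode concrete, here's a minimal sketch - in Python, with a made-up example.com URL, not our product's code - of how a proxy that transforms encodings can turn a perfectly valid &-containing value into a broken query string:

```python
from urllib.parse import quote, unquote, parse_qs

# A value that legitimately contains an &-character, safely percent-encoded.
original = "https://example.com/search?q=" + quote("cats & dogs", safe="")
# -> https://example.com/search?q=cats%20%26%20dogs

# Imagine a proxy that "helpfully" decodes percent-escapes before forwarding.
proxied = unquote(original)
# -> https://example.com/search?q=cats & dogs  (the & now looks like a separator)

naive_query = proxied.split("?", 1)[1]
print(parse_qs(naive_query))
# {'q': ['cats ']} - the rest of the value is silently lost on the receiving end
```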

The fun part of the bug started when I told my other development team about it - that we should, as a team, also remember that our product works in an environment we're not in control of. The response was quite direct and can be paraphrased as "not our responsibility". With a little discussion, we've now arrived at the conclusion that it is our responsibility, through the technologies and architectures we build on. Some handle this better than others, and if we choose one that needs a bit of support on that, we'd handle it - if only we understand that we should or could.

It's so easy to see things as external even though for the customer they are just one part of the product experience.

Saturday, February 23, 2013

Thinking around numbers

Doing a trial webinar (with real content I had not talked about in that detail before) on Experiences with Remote testing, I had questions related to one slide that left me thinking for hours.
The numbers you can see on the slide show two trends:
1. I've logged a lot fewer bugs since the other, remote tester joined.
2. In the timeframe with the two of us, my bugs have also been received better, as in fixed in higher percentages.

The questions were related to the fix percentages - why is there such a seemingly significant difference?

I had looked at the data behind the numbers while creating the slide, and my impression was - which I also said in the webinar - that it's due to the essentially different assignments each of us takes: the remote tester may be asked to test something that is not yet in use but is going out, whereas I might be testing something that is already in use by a significant number of users. It's not quite that simple.

There are two projects we need to work on. To keep things simple at the start, I've been the one who jumps between the two, allowing the remote tester more focus to learn one before jumping between both. This was also because the project we both worked on is more business-critical, and the one only I worked on is a completely new product that saw its first release in December. So the remote tester has only worked with the "established, already in production but new features added" product, whereas I work on both that and the new product.

Similarly, the features we've worked on have been at quite different stages of development. The bigger numbers from the time before the remote tester joined show me participating in a major refactoring effort where an area was basically redone, and we were quite keen on making sure it's better quality-wise than before. She's worked on areas that are "ready" and in production, showing problems in areas that are not otherwise changing, where finding people to take on the areas and fix the issues has been a different story.

Splitting the percentages to compare per project / product, the fixed percentage for the old-and-in-maintenance product is Local: 44 %, Remote: 35 %, and for the new product Local: 90 %, Remote: 66 %. But the remote tester has only 3 issues logged in the new product, as she has worked on it for less than two weeks. All of a sudden, the difference does not seem quite so drastic - I just factored in the two products that have a different working atmosphere for testing.
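To see why the combined numbers look worse than the per-product ones, here's a small worked sketch in Python. The issue counts are invented for illustration - only the per-product fix percentages above come from my real data - but they show how a different mix of the two products is enough to make the aggregate gap look dramatic:

```python
# (fixed, logged) per product; counts are hypothetical, percentages match the slide.
local  = {"maintenance": (44, 100), "new": (90, 100)}   # 44 % and 90 %
remote = {"maintenance": (35, 100), "new": (2, 3)}      # 35 % and ~66 %, only 3 new-product issues

def combined_fix_rate(per_product):
    fixed  = sum(f for f, _ in per_product.values())
    logged = sum(n for _, n in per_product.values())
    return 100.0 * fixed / logged

print(f"local:  {combined_fix_rate(local):.0f} %")   # ~67 %
print(f"remote: {combined_fix_rate(remote):.0f} %")  # ~36 % - mostly the product mix talking
```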

Another thing that impacts both our numbers / percentages with the in-maintenance product is a change in our bug process that was put into action quite soon after the remote tester started. Now all bugs we report go first for triage with the product manager, who is super busy and finds many things more relevant than saying something must be fixed right there and then. The product manager worries that if she rates the bugs as important, she loses some of the features she wants from the increment. And judging whether a bug is newly introduced or not is just as much work for the product manager as it is for the testers - so we end up leaving even more tails of quality debt than before. The change was made because, with two testers, there were so many things for the developers to go through that the product manager felt the need for protection - with side effects. We're just getting back to the point where this bottleneck is relieved again - it shows in bugs we've been trying to get solved for months being solved in the last week, giving the freedom of choice back to the developers.

Then there are smaller factors in the numbers. I get my numbers up by logging issues I identify from us talking or from logs - I dare to assign work to the developers even if I haven't done all the possible investigation myself. There's one of me, and there are 8+4 of them - no one can assume all testing is done by me. And I find the work to assign by talking with them; that's something I get from being local. I also avoid logging, as my favorite phrase turns out to be "will you fix it today or do I need to put it in Jira". And I can raise the priorities of the issues I log by talking with the developers, bringing them to a point where the analysis was done together so there's no reason not to fix anymore. I can work around the ridiculous process change, whereas the remote tester can't.

And one factor is that, as there's so much work, I don't bother reporting things I know we wouldn't address right now. I'll do that later. So I tend to be more connected with what is happening in development and what its guiding principles are, and am able to use that information to target my bug messages better. But I've had six months more to learn what they don't care for - and what I can make them care for.


Saturday, February 16, 2013

External contractors on my mind

In my two last jobs, I've been on the customer side of the contracting business.

My current place of work uses contractors to extend our capabilities, to get more people when the load is high. Recently we've had one contractor for development people, and another contractor for (exploratory) testing people. The contracting is really that - getting the right people.

In my previous place of work, the whole organization I worked in was built around controlling contracting. Little to nothing was done in-house. The focus was not on the people we'd contract, but on the next incremental change with a fixed price and externalized responsibility. A whole different ballgame that I don't necessarily believe in.

Since we now focus on people, I'm looking at interesting differences and more of a feeling of success - as much as there is success in developing software with all the uncertainties. There are two differences that stand out in particular:
  1. Incremental planning with focus on work remaining
  2. Accepting people, not deliverables
Incremental planning with focus on work remaining

With a very recent contracting project, we're requiring a visible monthly plan, monitoring of the work remaining for the tasks to be complete, and a burndown chart. When we see that the work remaining grows, we don't start a hassle about going over the budget - we adjust the budget or the scope. We know the people we have, extending our own capabilities, have not worked with our product before, and putting too much effort into guessing the needed effort right and making them heavily accountable would steer our focus away from the great product towards protection. They deliver continuously, and we try to treat them in the same way we treat our own people.

It was interesting to see that a week into development, we could already adjust the schedules and talk about the real pace at which we think we may be able to proceed.
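For the curious, here's a toy sketch in Python of the kind of week-by-week check the work-remaining numbers feed into; the figures are invented, and the real monitoring lives in the plan and the burndown chart, not in code:

```python
# Remaining work in person-days: index 0 is the plan, later entries are weekly reports.
remaining_per_week = [40, 38, 41, 35]   # hypothetical numbers

start, latest = remaining_per_week[0], remaining_per_week[-1]
weeks_elapsed = len(remaining_per_week) - 1
burn_rate = (start - latest) / weeks_elapsed   # person-days burned per week on average

if latest > start:
    print("Work remaining grew - adjust the budget or the scope, not the blame.")
elif burn_rate > 0:
    print(f"At ~{burn_rate:.1f} person-days/week, roughly {latest / burn_rate:.0f} weeks to go.")
else:
    print("No visible progress yet - a topic for the next planning round.")
```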

The contractor delivers us code that is supposed to do what we've defined, at least in the ballpark. If it did something completely different or inadequate, and we provided no on-time feedback through testing, they'd be ready to redo it any time - since all work is billable by the hour. But we'd be the ones missing our schedule window, which is much more relevant than showing responsibility against a fixed-price work order.

Accepting people, not deliverables

Our product is created by people. If they're skilled, it tends to show in the results. With contractors we've adopted an early feedback approach, where acceptance is about peers looking at what people produce. The long-term developers look at the contractor newcomers' code, with detailed comments to make sure it fits our expectations of a maintainable product.

It was great to listen to developers fuss over the insightful solutions our first new joiner brought into the code. I could feel the energy of seeing the positive sides, learning how things could be done in ways we had not considered, with a few comments about ideas we had failed to communicate about playing along with the rest of the codebase.

It was also great to listen to developers giving not-so-positive feedback on our second new joiner, as this was less than a week into development. Listening in on the trouble with structures, I'm sure the team collaboration will help people grow in the needed direction. And most insightful for me, as the team's tester, was how smart it really is to look at the code when accepting, and not just at the black-box functionality.

I was personally looking at whether we got unit tests or not, but for now, after the first increment, they are still promiseware. But the task of adding the unit tests was the first thing to start the new increment with - the same practice as with the local team in this case.

When we know the people who deliver, and accept them, it's a lot easier to accept what they deliver to production. Our process is such that the developer can't assume there will be a second opinion available on the side effects; most of the testing is done as "own testing" by the developer.