Monday, April 28, 2014

Developer skills and the need for testers

The past few weeks have led me to think increasingly about programmer-developer skills. Whatever I write about the topic does not fairly represent the multifaceted mix of skills and personalities in the real teams I reflect on; instead, it includes a fair bit of subjective emphasis to make some points I feel like making. To start, I need to mention that I absolutely love my colleagues and respect their contributions and all the positive surprises they come up with regularly, even if I feel frustrated on occasion.

I work in an organization with a long history of hiring programmer-developers. Hiring for this role is understandable, as progress appears to be made by writing code. Surely, to get the valuable features, a very practical transformation of ideas into code must happen. We have also had a very strong separation of roles: product managers turn the value of an idea into a concept or design without coding it, and programmer-developers do the actual coding. And product managers can test - if other work allows the time.

It turned out that what this particular case was lacking is testing. Not the amount of it, but the quality of it. These groups did not end up with a working solution that delivers value, but with something that works as long as you don't use it for the real-life scenarios it is intended for. It seems to me that a major contributor to this end result was separating the concept and design thinking from the 'just do what you're told' coding. To fix the situation, someone decided to hire a tester - but just one, as the limited budget is supposedly best spent on people who write code. Later one became two, while the minimum need is four.

Just today, I opened a discussion about yet another feature that we would reimplement because it did not match the needs of the users. I listened to arguments saying that it's not a developer's job to question what the users would actually need; our job is to do what they ask and accept that they try again later - their organization pays for these mistakes, which are theirs alone to make. I tried making the point that we had every chance to talk with the internal users before implementing, and that we failed to question what was said, what we understood, and what was actually needed. That discussion could be a core around which we improve as a team. The Clean Coder by Robert Martin puts it nicely: "It is the worst kind of unprofessional behavior to simply code from a spec without understanding why that spec makes sense to the business". The same goes for writing the spec or testing - any activity that requires you to think.

Another discussion I had recently was on the need for more people who actually see how value is generated and notice issues that threaten that value. That discussion ended with a fixed budget for these people and the idea that one would need to leave before another, with a missing skillset, could join.

So I roughly categorized the closest programmer-developers by the testing effort and need they create:
  • Architect-developers with experience. I love working with these types, and wish all devs were like this. They own the architectural design choices and strive to understand what is needed. They are also often given a position that enables them to perform, such as talking directly with the customers. Time spent taking responsibility for the same product or similar solutions shows in the end result. It doesn't always work, but the surprises that issues bring forth are considered at scale, not as individual symptoms.
  • Ones with potential. These types tend to be young ones who have not yet learned the unproductive habit of giving up on improving. They might not know how best to learn, and try to deliver what was asked - obediently. As a tester with these types, there are often many surprising connections between features that you get to show them. Instead of teaching these types to expect that from a tester, reaching their potential may require teaching them how to learn, together.
  • Ones without a system/value view. These types take what is asked and focus on implementing. If the product doesn't help with its intended purpose, it must be someone else's problem. These types think it's normal to implement the same feature three times just so that you don't have to talk with people with a different mindset. They accidentally waste a lot of effort but refuse to see better ways they could themselves contribute to. Testing for these developers is critical pre-implementation, to catch the expensive mistakes. There's a fair share of post-implementation testing too, but it appears the connections are not made and the code degenerates as fixing progresses.
  • The sloppy ones. These types seem to be programmer-developers because they can write code, not because they're good at it by the criteria of good I use. I tend to associate structural code within the object-oriented paradigm, or the infamous 'whole program in the catch clause of a try-catch' type of choices, with this group. But the worst part is fixing to hide symptoms - which generates more problems. As for testers, these types make you feel needed, as without you it never works. But testing here is just an odd choice if no learning happens. Perfect work generators for testers.
With people like this, a 1:10 ratio of testers to developers seems off. Then again, fixing quality should perhaps start with making the unwilling willing again - stop limiting smart people with artificial role boundaries - and especially with making the unable able. I'm sure people as smart as these developers are capable of more.

I could still use a little more skilled testing in these teams. Empirical evidence is also powerful in organizing the support people need to grow their developer skills.

I might also hope that the ones who are like taxi drivers who can't find even the sightseeing locations without a map reader (thanks to Michael Bolton for the metaphor) would get paid significantly less. When it comes to pay, I find that some should pay for the trouble they cause instead of getting paid for any output, as long as it is executable code. Measuring value and contribution would be something I'd like to work on.