Tuesday, April 29, 2014

Another day, a design concern

I tweeted a Robert Martin quote from the Clean Coder book yesterday:

"It is the worst kind of unprofessional behavior to simply code from a spec without understanding why that spec makes sense to the business"

As a reply I got a kind note.
Thus it seems appropriate that I've spent most of today working on a spec, and I feel I need to share a few experiences on that. There can be a lot of work embedded in that quote's call to avoid unprofessional behavior.

There's a new concept we are supposed to work on by the end of this year. As concepts and our experience go, it tends to be a good idea to play with the concept before jumping into implementing it, to avoid the "implementing it 2-3 times just because we did not focus on learning early" syndrome that is typical for us.

As usual, two people have had vivid discussions building the ideas of what the concept / features would be about. They've drawn some pictures that help discuss it. Nothing is set in stone - except the fact that the business representative's calendar is very full, giving us access to discuss with him once next week and then again in a month.

On my initiative, two other people start looking into the draft specification: myself in whatever role I end up with, and a UI designer in as large a role as he feels comfortable taking. The quick guidance is that we should discuss after we've read it - the same assignment for the two of us.

I read the document and realize that I would not advise implementing any of the stuff the way it has been sketched. The core concepts this builds on are concepts used elsewhere in the product with a completely different scope and meaning. Technically they seem suitable (filtering is a general idea), but the user-visible concepts have implications that would make things difficult. And there's a new stick-it-on-top-of-the-others view just for this feature - while other features for the same users would be elsewhere.

I hint at the problems to the user interface designer, only to learn that he read the document but did not formulate his own view on it. He has a few ideas of how to present the described concepts in the user interfaces sketched, but had not noticed that the concepts would not fit our overall idea of what the product is like. In discussion, we agree that he'll take another look at the document based on my heads-up.

I'd like to find a way to do two things:
  1. Build the skill of noticing when things don't fit the concepts we have in place. I think of this as seeing the system, and testing is a great place to build an overall view of what there is and how things belong together. 
  2. Learn to discuss concepts before they give the appearance that some higher-level decisions have already been made. As much as I try, the specification-by-example type of discussions don't seem to (yet) find their spot in the way we work. And there's more to understanding concepts than just the given-when-thens.
Just the amount and scope of questioning my dear developer colleagues would need to go through to qualify as professionals seems a lot to ask. So there must be a context-specific choice in play on how to distribute the responsibilities over the skills we have, with a nice stretch that grows our skills by taking us out of the comfort zone.









Monday, April 28, 2014

Developer skills and the need for testers

The past few weeks have led me to an increased focus on thinking about programmer-developer skills. Whatever I write about the topic does not fairly represent the multifaceted mix of skills and personalities in the real teams I reflect on; instead it includes a fair bit of subjective emphasis to make some points I feel like making. To start this, I need to mention that I absolutely love my colleagues and respect their contribution and all the positive surprises they come up with regularly, even if I feel frustrated on occasion.

I work in an organization with a long history of hiring programmer-developers. Hiring for this role is understandable, as there appears to be progress to be made with writing code. Surely, to have the valuable features, a very practical transformation of ideas into code must happen. We have also had a very strong separation between shaping an idea into a concept or design without coding it and the actual coding of it, split between a group of product managers and the programmer-developers. And product managers can test - if other work allows the time.

It turned out that the focus this particular case was lacking was testing. Not the amount of it, but the quality of it. It seems that these groups did not end up with a working solution that delivers value, but with something that works as long as you don't use it for the real-life scenarios it's intended for. It seems to me that a major contributor to the end result was separating the concept and design thinking from the 'just do what you're told' coding. To fix the situation, someone decided to hire a tester - but just one, as the limited budget is supposedly best used on people who write code. Later one became two, while the minimum need is four.

Just today, I opened a discussion about yet another feature that we would reimplement because it did not match the needs of the users. I listened to the arguments saying that it's not a developer's job to question what the users would actually need; our job is to do what they ask and accept that they try again later - their organization pays for these mistakes that are theirs alone to make. I tried making the point that we had every chance of talking with the internal users before implementing, and failed to question what was said, what we understood and what was actually needed. That discussion would be a core for us to improve as a team. The Clean Coder by Robert Martin puts it nicely: "It is the worst kind of unprofessional behavior to simply code from a spec without understanding why that spec makes sense to the business". The same goes for writing the spec or testing - any activity that requires you to think.

Another discussion I had recently was on the need for more people who actually see how value is generated and notice issues that threaten that value. That discussion ended up with a fixed budget for these people and the idea that one would need to leave before another, with a missing skillset, could join.

So I roughly categorized the closest programmer-developers by the testing effort and need they create:
  • Architect-developers with experience. I love working with these types, and wish all devs were like this. They own the architectural design choices and strive to understand what is needed. And they are often given a position that enables them to perform, such as talking directly with the customers. Time spent taking responsibility over the same product or similar solutions shows in the end result. It doesn't always work, but the surprises that issues bring forth are considered at scale, not as individual symptoms.
  • Ones with potential. These types tend to be young ones that have not yet learned the unproductive role of giving up on improving. They might not know how to best learn, and try to deliver what was asked - obediently. As a tester with these types, there are often many surprising connections between features that you get to show. Instead of teaching these types to expect that from a tester, reaching their potential may require teaching them how to learn, together.
  • Ones without a system/value view. These types take what is asked and focus on implementing. If the product doesn't help with its intended purpose, it must be someone else's problem. These types think it's normal to implement the same feature three times just so that you don't have to talk with people with a different mindset. They accidentally waste a lot of effort but refuse to see better ways they could themselves contribute to. Testing for these developers is critical pre-implementation, to catch the expensive mistakes. And there's a fair share of post-implementation testing too, but it appears the connections are not made and the code degenerates as fixing progresses.
  • The sloppy ones. These types seem to be programmer-developers because they can write code, but not because they're good at it by the criteria of good I use. I tend to associate structural code within the object-oriented paradigm, or the infamous 'whole program in the catch clause of a try-catch' type of choices, here (a sketch of that pattern follows below). But the worst part is fixing to hide symptoms - generating more problems. As for testers, these guys make you feel needed, as without you it never works. But testing here is just an odd choice if no learning happens. Perfect work generators for testers.
With people like this, a 1:10 ratio of testers to developers seems off. Then again, fixing quality should perhaps start with making the unwilling willing again - stop limiting smart people with artificial role boundaries - and especially with making the unable able. I'm sure people as smart as these developers would be capable of more.
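To make the sloppy pattern concrete, here is a minimal Java sketch of what I mean by 'the whole program in the catch clause' - a made-up example, with hypothetical class and method names that are not from any real product: the intended path is attempted, and when it blows up, the catch clause quietly re-implements the logic and hides the symptom instead of fixing the cause.

    import java.util.List;

    public class InvoiceTotals {

        // Sums invoice line items. Hypothetical illustration only.
        public double totalFor(List<Double> lineItems) {
            try {
                // The "intended" implementation: throws a NullPointerException
                // if any line item is null.
                return lineItems.stream().mapToDouble(Double::doubleValue).sum();
            } catch (Exception e) {
                // "The whole program in the catch clause": the real handling lives
                // here, silently skipping whatever made the happy path fail. The
                // broken data is never reported, so the symptom is hidden and the
                // problem resurfaces somewhere else later.
                double sum = 0;
                for (Double item : lineItems) {
                    if (item != null) {
                        sum += item;
                    }
                }
                return sum;
            }
        }
    }

The code runs and mostly 'works', which is exactly why this style keeps a tester busy: nothing visibly fails, yet the underlying mistake is never learned from.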

I could still use a little more skilled testing in these teams. Empirical evidence is powerful, also in organizing the support people need to grow their developer skills.

I might also hope that the ones who are like taxi drivers who can't find even the sightseeing locations without a map reader (thanks to Michael Bolton for the metaphor) would get paid significantly less. When it comes to pay, I find that some should pay for the trouble they cause instead of getting paid regardless, as long as the output is executable code. Measuring value and contribution would be something I'd like to work on.









Thursday, April 17, 2014

Nominated as a Finnish Tester of the Year candidate

It's the time of the year when nominations for candidates for the Finnish Tester of the Year award are out, and I learned I'm on the nominees list: for the 8th time. Yes, the award has been given seven times before, and I've been nominated as a candidate every year. And every year someone else has won.

The competition has a rule that you are ruled out of the nominees if you have won. And I sometimes feel just a little frustrated by the post-vote comments, where many people assumed I must have already won once, since I deserve to win.

I wanted to roughly translate what was said about me by an anonymous nominator this year as the basis for nominating me:

"Maaret Pyhäjärvi brings forth testing nationally and internationally. Last year e.g. in EuroSTAR program committee, as the architect of Helsinki Testing Day, organizer of several application development seminars, Agile Finland board etc etc etc. If activity was the measure, Pyhäjärvi would have deserved to win the title every year since 2007. This woman breathes testing!" 


That's beautifully said about me. I feel honored. Even though I did ask all the people close to me not to put me on the list again, either I missed someone or this was written by someone who did not take my request seriously. And it was someone who put effort into writing an updated description of what I've been up to.

The rules say that a Finnish Tester of the Year candidate is someone who:
  • has inspired colleagues or other organizations toward increasingly better testing
  • brought forth ideas and trends from the world into Finnish testing
  • has positively impacted the birth of a testing culture in their own organization
  • has influenced the results of testing activity (test coverage, issues found etc) in own organization or in the community
  • has done test-related innovations, rationalizing improvements or created new ways of doing testing
  • has had impact on the birth of testing as a profession in Finland
  • has influenced Finnish testing culture and the development of the tester profession positively
  • OR in other ways improved the abilities to do testing
You could vote for me to help me get away from this honor of being nominated every year: http://digiumenterprise.com/answer/?sid=1167464&chk=BQAVX7AV





Hands-on Testing is a Valuable Test Management Practice

In my previous post, I emphasized the non-testing activities I do to contribute in my team. Reading it through a day later, I realized it feeds the misunderstanding many people around me have about what testing is. When I emphasize the other stuff, I implicitly leave out the testing work itself, beyond bringing in the mindset of critical thinking and value-orientation.

I wanted to share a story about how important it is, in my experience, to do hands-on testing and not just hang out and contribute by talking and helping others.

At one point in my career, I was working in a customer-contractor setting in a multi-million project. I was assigned the role of a test manager, which meant a lot of meetings and a mix of planning and testing. As I was on the customer side, I had a tiny part-time acceptance test team to work with, on a huge and very complicated system. This was, after all, the last bit of testing after all the contracted layers.

By reading the specifications and trying out a couple of scenarios, we learned that the system could possibly work as specified, yet still not fulfill its purpose. It was a data processing system that gathered data from various external sources and put it together to sum up a financial decision, with very complex logic. If the financial decision was incorrect, the system was of little value.

As a test manager, I was at a choice point I did not realize I was at. I implicitly chose to talk with stakeholders more, to find a way to communicate that first lesson - and with the context at hand and the structures of the contractor, that was a major effort. While I chose to invest my time in the communication aspect, I chose not to test so much personally. I could not be in two places at once. My tiny part-time team tested and I supported them, but as I was not testing myself, the hands-on testing effort was very small.

I remember many combat-like discussions in the various committees, with each party fine-tuning their arguments. But there was no common goal. The contractor's goal was to deliver on schedule what had been specified. The fact that the specification was incorrect would not slow down the train that would pay them the money. And there was no better specification - as I learned later, the acceptance testers needed to test hands-on how the system behaved to know what they needed to specify. One experience in particular was such a scene that I still laugh thinking about it: I had two colleagues in test kicked out of a board meeting with a minute's warning because they were employed by other contractors, on the premise of "business secrets" - just as we were working on an expensive decision that would not have been as positive for the contractor if the test managers had been included. The politics drained a lot of energy.

Looking back at all that energy, it was wasted. With an explicit decision to test myself - even with a small testing budget - the hands-on empirical information would have been more valuable than the politics game. The hands-on empirical info would have helped transform the politics into something we could actually act on. I know it would have, since someone else, with a different resourcing model, had both parts, and the testing results changed the game.

The lesson I learned is one that Elisabeth Hendrickson emphasizes with the phrase "Empirical evidence trumps speculation. Every. Single. Time". The real information from testing is important. Theories are only theories. Regardless of the test role I feel I'm assigned to, it's still testing, not theorizing about testing that could be done given enough resources. There's always a choice and it comes down to me making the choice.

I learned to choose to test. Hands-on yet incomplete is better than just the plan and the communication. Testing to get actionable examples of things we should know about produces valuable, game-changing information in projects.


That's what I spend most of my time on: knowing from personal experience what works and what does not. All the other stuff I can contribute to in the project comes from that core: disciplined hands-on testing to transform theories into empirical evidence.

Tuesday, April 15, 2014

Frustrated by the silo mentality and working solo

Seventeen years in testing and intensively learning about pretty much everything in software development is (positively) mixing up my duties at work. I tend to contribute to pretty much everything we do, and try to speak for the idea that others could too.

To take a break from testing, today I contributed by summarizing our product roadmap - just to make visible, in a visual format, what we've already talked about that needs doing. And I initiated a meeting to collaborate on user interface design. The latter left me thinking, which turns into writing.

A little over a month ago, I was asked to test a new feature in the product. The feature was a search, and within the first minute of looking at it, it was obvious to me that it was close to "as designed" but nothing close to what the users were likely to need from it. With a history of failing this way a few times before (and also succeeding in avoiding it a few times), I felt frustrated. I had obviously missed the fact that we started implementing this based on a user interface design that just would not work - again. I could have helped, I could have contributed, but I had not. So again, I was in a position to tell how it would not work.

The message was taken well, but I was still puzzled. I asked around to find out if anyone had been helping the developer with the feature. I learned that our project manager had contributed the user interface sketch in an effort to explain what the feature could be, and that the feature had been reviewed by our user interface specialist. The developer had suggested a couple of relevant improvements, within the limits of the design that had been presented. I talked with the user interface specialist, to learn that he had reviewed it as in "yes, that's a user interface sketch" and, with all the other work, passed it on without paying much attention.

As the design had already been implemented, I spent more time testing it than the first minute. I learned through testing that not only was the design inappropriate, the implementation was also quite buggy. With the worst possible experience, I logged the bugs - and just checked to see that the issues are still waiting in the queue with no progress whatsoever. Not fun for anyone, I'd claim.

I got back to this experience as I managed to volunteer myself to participate in a "managerial" meeting with the business users at the end of last week. Listening and asking questions, I learned that my perception of "this will not work for the users" was indeed the case. Taking the lessons from that discussion, we had another one within the team today on collaborating on the user interface design.

We looked at the existing design, which focuses on showing the database fields to the user as separate search criteria. I brought in a post-it note I had sketched, while testing, of what I thought the users needed. We had a discussion, and agreed to leave the user interface designer to work onwards with it through some quick sketching.

A few hours later, we had another design. We reviewed it together and I contributed both ideas on how the design would fail with the users and what I saw as its strengths. Five minutes later, we had yet another design that looked brilliant. I still quickly went through the usage scenarios in my head, testing it, only to realize we had dropped a significant requirement. With additional discussion, a few minutes later we realized that the current design could easily be extended incrementally to cover the missed requirement, and that there would be a lot of value for the users in just doing the first part first.

To my surprise, the feature, with the second part included, looks exactly like my little sketch from testing. I guess we're now more in sync as a team, but it took a lot of calendar time and effort, leaving me hoping for ways to accelerate learning with better on-time collaboration.

My team has a product-owner type of project manager, a user interface design specialist, developers, and myself as a testing specialist. I don't particularly like the idea that if you removed the testing specialist, so much of our results would crumble. It's not because a tester is needed, it's because someone like myself is needed. I go through what I thought, what I learned and how I felt. I work through a lot of ideas on how I could help us succeed, how I could contribute, how I could help others learn. I can change what I do, and hope others will pick up stuff they find relevant - and learn to find the end goal relevant.

Occasionally, like today, I feel the amount of "not in my job description, somebody else's work" is overwhelming. And on days like this I wonder what the experience is that transforms the silo mentality into something a little more productive. I want something better, and I'm sure others do too. There's something wrong with a system that causes us to shy away from delivering value together in favor of playing with our individual tasks. So, more collaboration, more doing together. I wonder when the initiative will be on someone else and not assumed to be "in my job description" - as I have no such thing, by choice.

Friday, April 11, 2014

A tester could help avoid wasting developer and user time


I find it funny that I get upset about calling developers bad testers, yet reading James Whittaker's latest post http://blogs.msdn.com/b/jw_on_tech/archive/2014/04/11/stop-testing-like-its-1999.aspx just feels so out of context for me that I don't even care.

But it does prompt me to write a post, that I was thinking of before reading it, on something I learned at work today.

I had been invited today to meet up with a team of business users who use a particular area of the product we're building. My invite there was the first with that particular group, and the meeting was long overdue, postponed for various reasons.

The meeting opened up with the business users apologizing for their lack of time for us - for the meeting, for using the product, for providing feedback if they ever found time to use it. They were swamped with other work, and this product, while useful, wouldn't be their only way of getting the job done.

We discussed the needs for upcoming development work, and I asked many questions to understand their needs better. Others from the development team seemed focused on listing features, negotiating their order and coming up with a design the business users appeared to accept. Digging into their actual needs, we completely changed the ideas for that area. As the discussion came too late for one part, reimplementation will happen. But on two other parts, we feel more comfortable about the likelihood of making the feature helpful for its purpose.

Then we talked about getting the product into use to get feedback. As I briefly summarized what we had covered in our testing within the team, I heard sighs of relief that the interruptions to their already busy schedules might be fewer than they're used to from the times without a tester around.

It could be that our developers are not quite as updated as the ones James Whittaker seems to talk about. At least our users are fewer in number, and very much bothered when they get used as testers, regardless of the ease of reproducing and the pace of fixing. Bugs make them spend time on something when that time would be of more value used elsewhere. Context matters.

The testers of today might have a different set of skills - even without coding - than the testers of 1999. I find myself to be a catalyst that helps us start some of the difficult conversations that we finish together with a better understanding. And I have noticed many others identifying themselves as testers who bring in a similar approach. Back in 1999, I would not have expected all testers to be active explorers. Nowadays, the button-clicking robots who fall asleep on arriving at work still exist somewhere, but deserve to go extinct. James Whittaker did not seem to talk of that crowd, though. But I get the feeling he thinks of the types that insist on all bugs being equal and who never learn what information is valuable. Perhaps that's why I felt disconnected reading the post - there's another species of testers I don't even relate to?




Thursday, April 10, 2014

Continuous releases are a way forward even without automation

Tomorrow is a scheduled release day for one of my products. The approaching release day creates interesting behaviors: a product management team member requested no testing today, or the release might get postponed. It feels easier not to know of problems.

Coincidentally, tomorrow is also a release day for the other one of my products. It wasn't scheduled, but as we work by completing one feature at a time, a feature just got completed, so tomorrow it will go out.

Sharing the same release day left me thinking about the two different approaches, and how much of a difference there can be with one relatively simple change. The first team uses Scrum and sprints with a scheduled release; the second team uses Kanban and releases whenever features are ready, with emphasis on making the features smaller so they flow through more fluently. Neither of the teams has test automation to a relevant degree.

The first team completes development on main, and does fixing after the sprint in a branch, while already working on the next release. The test-fix tail is scheduled to be about two weeks, and yet it always runs out of time, postponing fixes. There are a lot of changes all around, and no chance to test them within the schedule. Every day of those nearly two weeks we just hope the testing does not find anything in time, so we can make the release - while still realizing that testing was not done.

The second team completes development on a branch, and tests and fixes with a focus on time through the process. When development (with testing and fixing) is complete, the feature is merged to main and a bit of final testing is done. We measure the times in the different stages and realize test automation would make us faster, and schedule a piece of automation as the feature to complete every now and then. The second team was just like the first team less than a month ago. It's amazing how big a difference that makes.

I love the fact that I can test with the second team continuously. The approach allows us to create a steady flow of features, whereas the sprint-type of model drove us into starting things that we barely completed - leaving most of the testing and fixing for later.

I see two other things that could have helped us:
  1. Learning to build small things of value -- this is still on the list, but is making much slower progress than I would hope to see
  2. Automating testing to a degree where the testing tail will be much, much smaller -- which still seems hard to arrange enough time for, with all the legacy (testless) implementation
We tried both before going for "continuous (manual) deployment". Now to remove some of the manual work, one item at a time. 

Sunday, April 6, 2014

Bad testers and the power of words in turning things true

Here's a twitter-inspired post. A visible discussion seems to be going on about anyone being a tester, and I felt I needed to write down some of my thoughts too. For the inspiring originals, see:
I found it interesting that I got a feeling of being offended by the title Jari selected - adding the "bad" qualifier - whereas the other one, saying anyone can be a tester, did not offend me. Reading the texts, the first one had all the potential to annoy, with the style of its remarks. I found myself feeling more sympathy for the message saying that anyone can be a tester. I've dedicated my whole career to being a professional tester. But the troubles I keep experiencing are not about my skills, but about our common attitudes and skills in testing in the teams I work with.

The organization I work for realized they could use a professional tester after 20 years of developing software without a dedicated / skilled tester. As I joined and pair tested with people, I learned a lot about attitudes. A developer told me that he is "too valuable to test as he can code", completely missing the fact that the people he deemed less valuable - dealing with sales and other customer-facing support - were the direct source of the incoming money paying our salaries, and time away from that work isn't what the organization needed. A product manager told me he hates testing, because no one likes the results and it's always stuff to do on top of all the other duties. In general, I entered a place where everyone was looking for excuses not to test.

As I started my work, we soon learned there was something I do differently to see problems that others miss. And as I was deemed 'good' at testing and everyone else got confirmation they were 'bad', the situation just got worse, with the assumption that all testing should happen in the realm of the 'good' - why waste time on doing the work with 'bad' skills?

I've spent a significant amount of time emphasizing that everyone can test, and everyone should test. Some of us are better at it, but every one of us can get better at it, given the time and focus, and believing it is possible. I would hate the idea of my developers and product managers hanging out with testers who emphasize how they are 'good' while others are 'bad'. Realistically, I'm not good at everything in testing. I get better by actively learning, and learning often happens with people with diverse backgrounds. Setting a label of 'bad' on some testers becomes a self-fulfilling prophecy: I'm bad, I can't get better, why bother even trying? I feel that in software development we need to put effort into removing excuses for not thinking about the flow of value from idea to use, and emphasizing that we absolutely need dedicated testers to do testing has been one of those excuses. Could we emphasize the need for professional testers without making that the center of the message?

Everyone needs to be a tester. We need everyone hands-on, empirically experiencing and thinking from different perspectives. Some will be better at it than others, but only through doing do we improve as teams. We can't ask developers or product managers to distance themselves from testing because there's someone else better at it - and often later in time. Time is important. And attitudes count. Simple bugs are simple to find. Many of them could be spotted if non-testers felt they needed to test while working through the requirements and implementation. I hate the experience of simple bugs being caught over and over again by the professional tester, just because others don't care about testing.

As for professional testers, some of us enter the cycle of realizing we are good at it, spending time on it and getting better. We might also learn that it's fun, needed, important, useful and valuable work to provide information on quality. As professional testers, we enter new domains (at least I do) and learn the domain in layers, to get from simple quicktests to an in-depth understanding of customer value. We learn this from domain experts, with significant effort on learning, but in a style where we can already test before we're fully into all the aspects of the domain. Domain knowledge does not come automatically just because I'm a professional tester. The domain experts could make better testers of that area while I don't have the domain knowledge. But they also tend to have other duties that stop them from using their time on testing, without that making them 'bad' testers.

Here's an example of building the domain expertise. About three months ago, I overheard developers saying that my team's remote tester could participate in a new feature, but it would be June before she would be able to contribute anything useful, as the domain of that area is pretty complex. The domain is energy - electricity, water consumption and the like - and the tester has absolutely no background in it. However, she has a background in mathematics. She has a curious mind, and the ability to create models, deduce and ask for information. In the first month, her testing would catch simple things. But after three months, everyone in the team acknowledges her as a valuable member, saying they could not have gotten this done this way without her. There was no option of asking all the others to learn hands-on with the product as much (testing) - we would have needed an extra person's effort on the team anyway. The product managers could have done what she did. But there was no product manager with the information and enough time available. There was, however, a good and skilled tester available. And we should be glad there was. It's not just skill, it's also the ability to use time to build that skill down to the most recent details.

Everyone can and should test. Not everyone is equally skilled at it. Even professional testers differ. Every day is a learning opportunity, for all of us. And with that idea directing me as a tester, I am better, every day. Saying someone is 'bad' is unfair and cuts down the motivation of those who need to learn. If professional testers get offended by the idea of being unneeded, are we supposed to reply by attacking, so that others feel as bad as we do?