
Wednesday, January 22, 2014

How to Insult a Tester

This post is about things that happened recently, flavored with a bit of attitude - both disbelief at what people say to each other and laughter at how common these issues are. Put your dry-humor glasses on as you read on...

There was a meeting to discuss testing, attended by the tester, the tester's personnel manager and the tester's project manager. All in all, the meeting was productive and took place in good spirits. Yet it was an exemplary case of How to Insult a Tester.

Idea 1: Blame the tester for missing bugs that caused trouble in production, despite knowing that only a third of the recommended testing was done

The meeting started off with a discussion about production problems from a week before - problems that were humiliating and should have been found. We all agree that they should have been found. But we don't agree on why they were not. As the tester, I think they were not found because they were never tested: there was no time to test those things. They were on the list of things to test, the list that no one on the team can help with because they need to do other work.

Idea 2: Explain that you want Monkey testing

The project manager suggested that he would like more "shallow testing" all around the product to find all the problems, and that the current testing is "too in-depth". He would like the tester to find all the problems in a shorter time. The style of testing needed would be a few hours on every area of the product - in a product where every area is so big that testing it properly would need a lot of time. The project manager continued by explaining that he wants more monkey testing, just clicking around, and that this would bring out all the bugs in a shorter time frame.

Idea 3: Explain testing belongs with the tester but developers can't help because other stuff is "more important"

The discussion continued with the 10:1 ratio of developers to testers, the non-existent test automation, and the huge number of features that could break with system-level changes, like the ones seen as problems last week. As a particular example, I raised the point that the testing effort planned in our common planning has been three times the effort of the skilled testers available, and that as a team we don't react when testing doesn't get done for lack of resources. The conclusion: the developers have features so important that they just must be developed, so there is really no choice but to have all the testing done by the tester. To soften this a little, the suggestion was to try to get permission to hire or contract a second tester - with the reservation that this most likely will not be possible, because "product management promised they can do monkey testing themselves". Top it off with "you can't work on the more important stuff the coders work on, because you need to test". That last bit was from a previous discussion, though, not this one.

SUM THIS UP
Dear project manager, please: 

Stop blaming the tester for the bugs in production if you refuse to invest in the testing needed to find them - in particular since the bugs would not be there if we did not make changes so hastily, under stress. I did not put the bugs in. I did not help remove them either, this time. But I did help remove quite a bunch of other bugs with a choice of how to use my time - a choice you did not question beforehand, even though you were asked to.

Stop calling the complex work of finding all the relevant bugs monkey testing. That is insulting, and it does not make anyone more interested in delivering the results you actually want.

Learn to think about delivery as a whole-team effort instead of working to isolate genuinely collaborative developers and testers into separate silos with an idea of efficiency that just does not work. Let us share the work, and let us take on less so that we can actually see the tasks through.
 

Thursday, January 9, 2014

Follow through - it just might pay off

I was reminded of a lesson I learned earlier in my testing career by something that happened today.

Towards the end of the working day, I noticed a bug report had just been marked resolved. With my newfound powers (I control my own builds, finally!) I was keen to test the fix right away. I checked the comments, only to find that no fix had been made. Instead, there was a comment saying that this was related to a behavior in the program we plan on fixing later, and was actually exactly the same problem.

I talked briefly with the project manager, who knows the product in nice detail, and he almost convinced me that the symptoms I had reported could be related to that. I asked quite a few questions to clarify how the feature should work if this was indeed the explanation for what I was experiencing. Joking over the low wall of my cubicle, I told him that he must know I would not take his word for it - I would need to see it by testing it. I felt this was probably a thing for me to experience. And perhaps I would notice something else while checking - it had been a while since I'd looked at that particular scenario.

I decided on the minimum test I would need to do to find out whether it works as specified for now, even if it doesn't work the way we'd like in the long run. And, a little to my surprise, I learned that while they had been looking at component A, the problems causing the symptoms were likely in component B - and the problem was real. I was able to add better-isolated steps and a clearer explanation to the issue.

What this little episode reminded me of is:

Follow through, even in cases where the explanation sounds plausible - theory is just not the same as trying things first hand.

The tone of the discussion with the project manager was also very positive. He never suggested I might be wasting my time. We both smiled and joked about it, both before and after. And we felt a shared sense of accomplishment, now that we understand the feature just a little better and know where to dig to find the thing we need to fix.

Tuesday, January 7, 2014

Do you really think finding YOUR bugs is fun?!?

As a tester, I provide a service. As a person, I provide this service because I enjoy it - often very much. There's a but: it seems some developers have the ability to take all the joy out of testing work. I had an experience like that today.

I reported an issue on a feature we had just been working on - one that had had a similar consistency-within-the-product issue fixed in the last week. The fix had moved the problem elsewhere, and thinking I was being helpful, I let the developer know of the problem.

The first issue had been raised by the team's architect, and the developer had given up, under pressure, on his idea that while for the rest of the product we aim for a polished and finished experience, his first feature on this product and technology would be exempt.

The issue I raised probably got all the steam that had been building under the hood, and the response was much stronger: fixing this would be a waste of his time. I explained that if we, as a team, indeed think this type of issue is a waste of time, I'm happy not to report issues like it. But in my experience with this product, we have had the habit of fixing such issues; I'm happy to change that practice, but the rules should apply to the whole product - or at least to similar features within it. He agreed to what was, from his point of view, a compromise: if our GUI designer thought it should be fixed, he'd fix it. He would have preferred waiting to see whether real users saw it as a problem - implying that whatever I did was something no user he could think of and care about would do.

An hour later, the GUI designer had taken a look at the problem and said it should be fixed. So I suppose it will be.

There's testing that is fun - creative, intellectually challenging, the kind that makes me feel useful. Fighting over information that isn't wanted isn't exactly fun. Finding the same problem step after step, while the effort goes into patching the reported symptom instead of the underlying problem, isn't exactly fun. One skill I feel I've built over the years is finding a fun way of doing the things where developers see no fun - much of the absolutely necessary testing falls into that category.

I remember many testing courses and conferences where I've been educated (and have educated others) on how to be considerate towards developers: how to report bugs in a non-judgemental way, how to discuss things face to face rather than resorting to passing messages through the bug tracking system. Today I'm wondering to what extent that kind of skill is taught to developers. I'm trying to provide a service, but my service does not normally include accepting insults, belittling or swearing. I find it amazing that there are still people who might think this is acceptable behavior. Luckily, my two examples are both older ones. I can put my faith in the future generations of developers, who will treat their colleagues with respect, right?

Another lesson I'd love more of my developer teammates to catch: when comparing an hour of end-user time and an hour of team time, we may actually see more value in the end user's time if it is not spent serving our needs. The end user might actually be able to do his job - like selling our product - if he weren't always busy letting us know that he could not do what he was trying to do. And seriously, users are not necessarily happy even if they don't complain. Workarounds let them do what they were trying to do, but is that really what we wanted to offer - a workaround for the very thing we were trying to implement for real? Who wants to drive their software development with the idea of "let's just react to problems when users see them"? I hope that too is a thing of the past.



Monday, January 6, 2014

Make presentation selection difficult for EuroSTAR 2014

I had the pleasure of joining Michael Bolton, Bart Broekman, Alan Richardson and Rikard Edgren on the EuroSTAR program committee last year. I did not realize I was the only woman in that group - gender has never been an issue to me - until we discussed gender as one of the criteria for filling the keynote slots. That discussion was quite painful for me, as it left me with months of questioning (I never asked!) whether I was chosen for the program committee on merit (which I know I have and shouldn't question) or for my gender. I made a commitment to myself never to experience that again, and with the help of three amazing women I managed to put the idea behind me: Ru Cindrea helped me focus my energy on the positive, Fiona Charles helped me see that explicit criteria might not be the most relevant way to change the state of affairs, and Alexandra Casapu helped me put the whole thing behind me by sharing that while I only needed to suspect my gender as a selection criterion, she had been told outright, on occasion, that it was.

I strongly believe people should be speaking at conferences because of their merits, not because they are the appropriate gender. If you have a good story and you can deliver it well, and it teaches the listeners something they need and want to hear, it should get one of the slots. To create a conference with great presentations, you need a variety to choose from. You need a bit of healthy competition - striving to make your story relevant in both content and wrapping - and that's what the call for presentations tends to be about.

At EuroSTAR 2013, Rikard Edgren and I made a commitment that both of us would strive to make the selection of the best paper award challenging for the 2014 program committee. Nothing spices things up like a positive sense of competition - where we can help each other make the selection hard, because both entries would be polished and relevant. So I'd better get on with my work on the talk, and then the paper, to be in the competition.

So, please take this as an invitation to join me in making the work of the 2014 program committee hard. When too many great talks are suggested, it takes more to build the program - but it also creates a conference that takes us forward. I would hope to see a mix of consultants and non-consultants. But in particular, I'd like to see gender never raised as a criterion, thanks to an abundance of both genders in the proposals. For that, I hope there will be many and again many women submitting - just as there should be many and again many men submitting. There's equality in testing; let that be visible in the submissions.

If I can help you with your proposal, let me know - I'd be happy to. The testing field needs its newcomers and its seasoned speakers alike to bring in new and relevant content. I've had the privilege of introducing dozens of new speakers in Finland, starting from private discussions with just a little encouragement to share their great stories. We all have what it takes. Let's just share, in the call for presentations, what we're into sharing right now. The good stuff needs to be out in the open.

Friday, January 3, 2014

When bugs feel too simple

I've spent today testing a feature by reading its specification. I let the reading bring ideas into my head and followed them - with the result of logging quite a few bugs.

One bug in particular was the tipping point today in how I feel about my usefulness.

I was reading the part of the specification that mentions the copy feature, and realized that while I had tried the feature, I had never really owned the data I was copying - that is, intentionally created it to be something I would find relevant for this copy action. So I created maximum data (filling in all the fields) that I could recognize (naming everything so that the names tell where they belong) and hit the copy button.



With some more tests, I could pinpoint the piece of data causing the problem. The problem was not about the contents of the data I entered, just about my filling anything in to one particular field.
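That pinpointing step is easy to sketch in code. Here is a minimal, hypothetical illustration - the copy_item function and the field names are stand-ins I made up, not anything from the actual product - of trying the copy with one field filled at a time to isolate the culprit:

```python
# Hypothetical sketch: isolate which single field breaks a copy feature.

FIELDS = ["name", "description", "location", "notes"]

def copy_item(data):
    # Stand-in for the real copy feature. To mimic the bug,
    # this one fails whenever the "notes" field is filled in.
    if data.get("notes"):
        raise ValueError("copy failed")
    return dict(data)

def find_culprit_fields(fields, copy):
    """Try the copy with each field filled in alone; collect the fields that break it."""
    culprits = []
    for field in fields:
        data = {field: "recognizable value for " + field}
        try:
            copy(data)
        except Exception:
            culprits.append(field)
    return culprits

print(find_culprit_fields(FIELDS, copy_item))  # -> ['notes']
```

The point of the one-field-at-a-time loop is the same as in the manual testing: once the full-data copy fails, varying a single field per attempt tells you which field matters, independent of its contents.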

With this bug, I could not help but feel frustrated. A very simple way to see whether a feature works is to use it. And if you are going to use it, why not use it with full data?

This example is from a team of me and six developers. I'm a part-timer on the product; the developers are full-timers. And yet, continuously, over and over again, regardless of all the discussions about developer testing, we end up here: evidence of the feature never having been used.

I went digging through the bugs I've logged in the last few weeks and came to a very depressing conclusion. None of the bugs I found required any more sophistication than this one. All it takes is a little time with the product.

I need to start doing something about this. Again, and more. This is not a skills issue; it's an attitude issue. The developers can test this. For a reason I can't really get out of my team in discussions, their testing for these issues just does not happen often enough.