Friday, November 15, 2013

Unknown unknowns from another perspective - numbers and tacit knowledge


At EuroSTAR, Alexandra Casapu talked about "Fooled by the Unknown Unknowns". Thinking through her story, I still think that her approach to turning her stumblings in a project into a success story, which it truly is, shows the same remarkable character she has exhibited as a tester growing into excellence.

When I suggested that she use her first "own" feature as a story, I did not yet have in mind what kind of a story that would make. Thus looking at the story from another angle seemed like a fun exercise.

The numbers

I collected some data points about the effort:

My first realization in collecting these is that the story is not about a project. It's about a testing assignment within a project. It's about an assignment that did not, even at the most focused times, get the full attention of the tester, as even January turns out to be just a third of the hours in that month.

The assignment is about coping with many things at once, and taking responsibility for how much time you need as a tester. This was the first assignment where the priority of the task was such that more time could have been taken - and was, right after the feedback. The assignment is also about the importance of not delaying feedback, as the decisions on going forward with the customer were pressing at the end of January.

The data points also remind me that this is very typical of how things happen: from the decision to start to customers actually starting may take quite a while. I would not be surprised if, with the budgeting cycle now at hand, we'd only now start seeing the issues we've missed. And we did miss, in numbers, quite a significant amount of things the customers paid attention to, while the bugs that were put on the wishlist did not come back from the customers. We differentiate between bugs and change requests, and with change requests we may claim we could not have had the info available - except that my other project keeps showing that, surprisingly, a tester may have info available about needs that are not specified anywhere. Thus I think some of those were also escapes from us.

I'm also reminded that the external push to start questioning what you actually know may not need to be that big an effort. In this case, it was less than a working day of discussing documentation and playing with the software. And with that, the change in numbers by the original tester was already significant.

Looking at this and thinking about what happened, I think the time it takes for things to sink in also plays a relevant role. The later bugs found by the tester were not about new problems introduced, but about new learning that led the tester to pay attention to the right kinds of aspects. Every round, this testing seems to be getting better.

Tacit knowledge?

The talk also left me thinking about the role of tacit knowledge. A point a lot of people picked up and commented on was the tester's remoteness, and being deprived of communication with the business & developers. An example of tacit knowledge that could have helped her would have been seeing me test something similar. Then again, I wasn't testing the same or similar things; I was on completely different areas.

Looking at what was missed, there's clearly a piece of tacit knowledge that was not put forward in the discussions - in addition to the info that was told but did not sink in, as usually happens with information you have no context for yet. No one needed to tell me that this customer, whom we talked about by name, is important. It's a customer with some reputation in Finland, so knowing the importance is knowledge of the local culture. In hindsight, it did not even occur to me that providing more context on the customer than what the product data reveals could have been useful.

It will be interesting to look more deeply into tacit knowledge in the research we're starting at work. Or rather, the research of which we're starting to become the subjects at work. I still think that a lot of the issues we were facing are basic lessons new testers need to learn, and can't learn from books or courses: thinking in multiple dimensions, using all sources in a proper series of actions so that they enable you to learn more and don't close out options. And getting feedback on how you do in real projects is really valuable. All a junior needs is a little push to become a lot better.

Every day we learn, every day makes us a little better. I love the playful competition in us learning side by side, that keeps challenging me after all these years.  

Friday, November 8, 2013

How to treat a testing contractor

With EuroSTAR 2013 behind me, there's a story to share. I got to listen to the great presentation 'Fooled by Unknown Unknowns' by Alexandra Casapu twice, as she was selected to do a do-over session as something you should have seen if you missed it the first time.

Alexandra works with me (or for me), and inspired by her authentic story and great improvement in testing with our product, I have my own story to tell.

As a test manager / tester in a customer organization using services from Romania, I could go with a suspicious approach. I could require that the remote tester, for visibility purposes, creates test cases, runs them and marks them passed or failed. I could require detailed notes, detailed hour reports and detailed monitoring. I could be suspicious about how she uses her time, and focus on ensuring the time it took to prepare for this presentation was not in any way invoiced from the customer. I could give her the least interesting assignments in our testing that need to get done, and cherry-pick the fun stuff for myself. I could give her ridiculously small timeframes to test large features, and then complain that she did not find all the problems. I could show her no support in thinking about how to make the testing she does better, implicitly guiding her towards shallowness. I could require compensation for the bugs she missed that I had to find in the feature she talked about.

I don't do any of that. And I wish there were more customers buying testing services who would not do that either.

Instead, I guide her work like I'd like my work to be guided. I require thinking and taking time to reflect, and I make sure we pay for those hours too. I provide feedback, by testing in the same areas and by reviewing ideas of what to cover. I treat her as the colleague she is, with the idea that by treating every day of testing as a learning opportunity, every day at work makes us a step better. And I strive for the same excellence in testing myself, using most of my time on testing too.

There are many customer organizations outsourcing testing without involving a skilled tester on their side. These customers may set unrealistic expectations and requirements on how the work should be done, and not have the necessary viewpoint for assessing whether the testing performed by the contractor was any good. The criteria of goodness may degenerate into numbers of passed test cases, instead of valuable testing done with a reasonable effort. With trust missing, we end up building crazy approaches to manage the distrust.

Someone asked me on my way home from EuroSTAR if Alexandra asked permission to talk about our project. My reply was that she did not ask - I requested that she share a real story with real details, as that is a direction I feel the testing community needs. I'm fortunate to work for a company that shares my view on this, to the extent that we're having researchers watch us work on testing, and hopefully soon enough publish results about what it is that skilled testers actually do.


Thursday, October 31, 2013

Slow progress and patience - getting devs to test more

Torn between my two completely different teams, this is a story from the one that makes me impatient. When I joined - already 1.5 years ago - I negotiated a "test-fix-finalize" week where the whole team would do 'testing' - except we would not do testing, we would do code changes to make automated tests possible. We would fix bugs that we really would not want to document in our tests. And we would add automated tests, unit-level first.

The allocation to "test-fix-finalize" has stayed, leaving a small window for exploring how the system works before it goes live. And we do need that window, since we have not managed to get very good at small incremental changes or test automation on any level. Or, some might still argue that 2-3 weeks of developer work is small.

Quite recently we have moved into introducing Selenium tests. After the first examples and the discussion about "can't we just outsource this to some testers somewhere", the team has been adding those. There have been workshops, first one day and this month two days, where the whole team sits together and codes tests together. We don't do this for anything else except tests, so I'm kind of happy to contribute to that.
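
For a sense of what these jointly written tests look like, here's a minimal sketch of a Selenium test in Python. The URL and element id are made-up placeholders rather than details of our actual product; the point is just the pattern of driving the browser and asserting on what's visible.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_front_page_shows_title():
        # Start a browser session (the real tests run against a test deployment).
        driver = webdriver.Firefox()
        try:
            # Hypothetical URL and element id, for illustration only.
            driver.get("http://localhost:8080/app")
            heading = driver.find_element(By.ID, "page-title")
            assert "Welcome" in heading.text
        finally:
            # Close the browser whether the assertion passed or not.
            driver.quit()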

When adding tests, we've also been changing the product. And that again, to me, on some small scale, is a plus on the list of why I wanted the work to stay in-house with this team: to not get yet another round of tests that no one maintains, runs or feels any ownership of. Yes, we did that once or twice before I joined.

Having people sit together has other positive impacts too. Like this week, while they were working on the tests and I was spending time on my other project, they started talking about how they want to do things. I got an email telling me about their "decisions", where the most relevant to me was this: they decided to plan their effort so that it includes testing - well, their idea of testing - and to trade tasks with other developers to get a fresh pair of eyes on their own code. A nice step; I just wish the system testers would be part of that cycle too. But this did not exist before, so it's definitely progress.

I was thinking about what actually brought this change: visibility and sharing the pain. A month ago we moved the system testing backlog into the same Jira project where all developer work is tracked, because the time was right. And since then, the team has had to see how the single tester was getting piled with three times the work that could be completed. And being the nice people they are, they are actively thinking about how to help.

I wish it would help immediately, but I guess I need to work on my patience. One step at a time, never give up. If things don't feel right, there's a way to change them. Sometimes finding the way just takes quite some time...


Monday, October 28, 2013

Reading specifications with thought

There's a rock I've hit hard too many times recently, and the pain I feel is clearly telling me there's a need to change. As part of processing this, I thought I'd blog. After all, it's been too long.

Starting with an example, the simplest of the selection. We've been adding a feature to our product, which is about printouts. I've seen the spec, read it as thoughtfully as I did at the time, and waited a little for the feature to be implemented so I could do some testing.

It turns out that the feature was nice in isolation from the other surrounding similar features. It turns out it was nice as picture and text, but when put together with live scenarios, there's more than first meets the eye.

So I log a bug on that. We eagerly discuss it with the team and decide on a change to what we had thought previously. Then implementation, and again comes testing...

It turns out that we missed yet another scenario. While the first issue was that printouts of two sorts worked differently in the main view, the second is that there's more than the main view, and again inconsistencies that would lead us to a different design.

I'm frustrated, as I did not see these coming when looking at the spec. Then again, no one else did either. But failing twice in a row, with a very similar pattern, is just painful. And we keep failing with a similar pattern.

So thinking about what to change, I realize I'm puzzled. I could take more time to test the specifications, since I seem to have some touch for seeing the problems with the live product. Then again, someone else could do that too.

On the positive side: no one in the team is initiating discussion on whether these are bugs or should be fixed. I'm just frustrated with our inability to remove the root cause that keeps us stumbling over similar issues again and again.

For anyone handling a tester's issue reports: there's usually more than the one described symptom. Removing a symptom without addressing the underlying problem isn't taking us where we'd want to be.

Off to reading the next specification, with scenarios I just defined to support my thinking. Perhaps these will help the others see problems too; time will tell. Reading with thought is quite different from just reading. There are so many things between the lines that make us waste everyone's effort.


Friday, August 2, 2013

Types of events in Agile Finland community

I seem to be making a habit of replying to blog posts with blog posts, and here's the source of my current inspiration to write: http://learninggamedev.wordpress.com/2013/08/01/join-me-in-re-imagining-the-agile-finland-coaching-circle/

I've been setting up a session to reimagine things in Agile Finland on a larger scale than just coaching circles, with a 24-participant limit this time. Thus I've also tried to position the coaching circle within a bigger scheme of activity. The fact has been that for quite some time, coaching circles were pretty much the only activity, muddling them into "all things agile, from beginner to advanced". But it does not have to be that way, even though it can be.

From the discussions around this, I gathered the types of events we seem to bounce between when organizing:
  • circles (like the coaching circle) are for practice and deep knowledge through shared insights
  • dojos and retreats and camps (like coding/testing) are for practice
  • unfacilitated meetups are for meeting people interested in similar themes, but finding the right conversation is up to you
  • facilitated meetups are for talking about a topic together in a group / subgroup, making finding the right conversation a little easier
  • peer conferences (unconferences) are for sharing real experiences and debating them for insights
  • conferences are for listening to ideas and experiences with possibility to discuss or gain experiences through exercises
  • webinars are for listening to ideas and experiences in cases where you can't be onsite
  • working parties are for creating something together. 
  • courses are for teaching a topic with exercises to practice
I placed some of these types of events on a scale of Content vs. Group size to show how different event types are usually intended for different audiences.



For Agile Finland, the Agile Drinks/Dinners have been unfacilitated, with nice groups of people meeting up, sometimes with a specific theme and sometimes not. The Dinners we've organized within the Finnish Association for Software Testing have been facilitated, meaning that there is one discussion ongoing for the whole group instead of everyone grouping up around different themes.

We have the unfacilitated / facilitated Dinners. We have the dojos. We have the coaching circles. And we have the conferences. What we've been missing is something for a beginner/intermediate audience that allows a larger crowd and happens more regularly - a mini-conference with a facilitated discussion twist. That's what the Agile Breakfast will be about, and the first one will most likely be on 6.9.2013 in Ruoholahti - the first Friday morning of the month. The location still needs confirming.

For the status of events, I collected this: https://docs.google.com/spreadsheet/ccc?key=0Ak7e3JZGe_hadDBscGF4NEFtZm5JdnVMZEE1ZXF0NUE&usp=sharing - just to see what there could be.