Sunday, February 22, 2015

A lone tester's guide to her developers

Agile... the best and the worst thing that has happened to my professional life. The best in the sense of what we can create iteratively and incrementally focusing on value, collaboratively. The worst in the sense that so much of my energy goes into fighting for my right to exist and feeling isolated, different.

Here's what I keep hearing around agile:
  • We just automate all testing - there is no testing we would not automate. 
  • The only non-programmers in or close to the teams are the product owners. Testers must be programmers (because automation is all there is).
  • We can just release and monitor production and fix quickly (there's no other information we recognise that testing would give)
  • We don't recruit testers anymore at all and no one else should either
The number of testing specialists working with agile teams has clearly been going down over the years. Some people have become generalising specialists, working heavily on test automation. Others have changed careers away from software. There are again fewer women working in software, as testing was the specialty within software that had a lot of women. 

I kind of like the ratio of 1:8 of non-coding testing specialist (well, coding-avoiding would be more accurate) to coding developers in my team. Looking at the big picture, I wouldn't add more testing specialists, although I wouldn't have a team completely without one either. I wouldn't add more testing specialists to my team since one can already find so many problems that the whole team cannot find enough time to fix them; I would add people who fix (and would not break) things.

But the ratio has a downside: I've never felt as alone.

I'm social, and I hang out with my team. I listen to them getting very excited about technologies (SignalR is so cool, sure...), and I participate in discussions about designs and the value to deliver. We work great as a team, but I'm not satisfied.

I miss colleagues who would not only listen to me getting very excited about testing and bugs, but would actually share my excitement. I share my excitement with my team, only to see the same sympathetic look in the developers' eyes that I have when we talk about cool technologies. So I go look for other testers who want to talk and share, as my means of coping. I participate in (and organise) peer conferences on testing. I go for drinks and dinners with my testing colleagues from other companies to share stories. I consume (and create) a lot of testing content online and in conferences. And I realise I'm doing this to cope and to be happy with work, not only to share what I learned as I did in the past.

Whenever testers meet, a common topic is tester-developer relations. I don't see anything being particularly wrong with my relation to developers; we are just interested in different things while sympathetic to the other's viewpoint. I again reread James Bach's A Tester's Commitments (to Developers) blog post, and started missing a developer's guide to testers. As the single test specialist, the only one who is different in interests (and the only one who happens to be a woman in this team), it would be great if all the advice weren't about how I deal with you, the developers, but also the other way around.

Here's my lone tester's guide to my already wonderful developer colleagues, draft 0:
  1. Recognise we have different, valuable viewpoints into creating our product. You don't want to think like me and I don't want to think like you, but together we're better. Continue to listen sympathetically, just like I do.

  2. When I tell you about a bug, keep taking it as a positive thing, a puzzle we're solving together. Don't leave all the sugar-coating work to me; assume I mean well when I'm happy that it does not work quite as we wanted it to. As a stretch, try seeing things my way: finding information we did not know of is kind of cool, and we could all celebrate together the fact that it was found and can be fixed.

  3. When the product works in production, I let you get the fame. You get the thanks for delivering a product that works; I don't deliver any metrics that would make us adversaries to emphasise my contribution. It would be nice if, in return, you actively recognised my contribution, even to the degree that you would talk about it outside our organisation to people who think my kind is not needed.

  4. Actively pair with me. Pair on your code. Pair on my tests. Pair on test automation. Pair on splitting the features. Don't always leave the invitation to pair as something I need to do. While I'm only sympathetic about your excitement towards the details of technology, there are a lot of things I'd like to participate in without turning into a programmer.

  5. Deliver me software you think might work. If you did not test it yourself, you're not making me feel special by letting me notice simple things, you're just wasting time for both of us. If you know of problems, tell me and don't keep secrets just to tell me later that you "checked if I would notice".

  6. Share with me the things you have learned and what you considered difficult in creating our product. You know I will always ask for things like that; volunteer that information and help me work less to get inside your heads.

  7. Go with me to testing conferences and meetups without me forcing you to do so. Be the rare developer who shows interest in understanding testers. Listen sympathetically and seek to understand what makes us tick. When we accidentally talk in words that exclude you, point that out so that we can be better together. Try to see our good intentions even when sometimes the ways we tell our stories offend you. And when someone talks about testing at a developer conference, go listen and tell me what you learned; show you care. 

  8. Dedicate a year of your life to learning to be good at exploratory testing, understanding organisational systems and delving into the details of how the business works. That way, I will have a colleague who is interested in the same things I'm interested in for that year. The product owners tend to keep more distance to the practicalities and believe in us just getting it to work, so they won't help me much here, as great as they otherwise are. And if you all do it, one after another, I will have true colleagues for years.
I still miss having people like me around, and will seek out bigger products with many teams to find the connection I'm missing now. I don't miss the times when we had siloed test teams and adversary relationships with developers, but I don't like hearing that my contributions are not needed when they clearly are.

As a minority of one, it would be nice if, for just a while, the majority worked more on making the minority feel included, appreciated and welcome. My team tends to do that, while my community - agile - tends to have more challenges with it. If I only wanted to stay where I am, I wouldn't feel bad about what happens in the wider world. But there are still many more places for me to contribute to, as I serve 2-3 years per organisation.

This tweet seemed a very appropriate addition:
Stop trying to make testing go away when it still delivers relevant value. Checking is not testing. We need both, in a skilled manner.

Thursday, February 19, 2015

As a tester, I can represent a user

I'm writing my second post today based on something tweeted from a SAST (Swedish Association for Software Testing) session today - just because explaining context does not fit into a tweet.

Here's my source of inspiration:
I replied to this, trying to stick to 140 characters, and got a reply that warrants more than 140 characters.

This just reminds me of so many of the discussions I've had over the last few years while driving my team towards continuous delivery that I need to outline some of my thoughts - still as a testing specialist.

  1. The quality the team is able to produce without manual testing may differ greatly
    And when it does, having someone who sees problems (a testing specialist) while others don't can be a very useful thing to have. When you use smart and thinking people, you will probably use them for two main purposes: exploratory testing (an approach, as opposed to scripting manual cases for regression purposes; it combines regression with testing changes in an exploratory way) and increasing the level of test automation. I don't want just the latter, I want both.
  2. Breaking things and fixing fast can have business value
    And when it does, it still interrupts the flow of new features in the development team to make the fixes available, especially if the fixes require significant effort. You might not want to block the flow and steal away capacity to support things in production that you could easily avoid just by including a bit of exploratory testing in the process.

    When I say can have business value, there's a story behind this. I learned with one of the products I'm working with that fixing bugs quickly makes customers like us - since the competition is slower but still buggy, and the customer base is used to waiting for fixes. We just couldn't get as many real, valuable features out when bugs from production kept interrupting our development flow. It's just an opportunity cost: time used on fast bug fixes could be time used on value worth many times the cost of the testing that helps us stay in the right flow.
  3. Before delivery, there is thinking time embedded in the change task
    While we want to deliver the value to production with continuous delivery/deployment, each value item we implement is thought through without pressure, to allow thinking time. I wouldn't want to think development teams hack together random solutions without thinking them through - that would be unprofessional. Fixing the same thing many times isn't the kind of learning intended.

    So why is thinking time for a "developer" perceived as ok, but thinking time for pairing two people, "developer + tester", perceived as blocking continuous flow and learning, instead of actually amplifying it? When you test a change in an exploratory (skilled) fashion, your thinking will include things that should not have changed. That is the essence of regression testing. But with an exploratory approach, it is never just regression testing.

    We should not only blindly change things but also think about what changes. Exploratory testing, seeing our change in the context of the product, seems relevant to me in addition to theorising about the change (designing it to work in collaboration).

    Refer back to point #1. Some teams think great without a testing specialist. Others don't. The ones that don't learn to think better when they blow up production in relevant ways many times - if no one has kicked the poor developers into a corner where they just assign blame instead of learning much of anything. Some organisations just need support on building a culture that allows for learning, in my experience...
  4. Real users come with an opportunity cost
    Users, too, see the problems testers can see. Some users see the problems right after they are created; other users report back the problems six months later when we no longer remember what we changed that broke it. And we value fast feedback to learn and to be efficient with the fixes. Users need to be seriously annoyed before they take the time to report - the old wisdom that only every 10th user complains, which you can see all over the marketing literature, most likely still holds.

    And the real users do not exist to report bugs to us - they have a different purpose to serve. My users try to deliver construction projects with the support of our software. When our software does not work, we take them away from the thing our company makes money on to make them our testers. Since there's an opportunity cost again (time spent running into problems and reporting them is time away from something else they could be doing), it's kind of easy to see that we may want to invest in having someone (everyone in the development team, for that matter) doing the best we can to make sure they get interrupted as little as possible.

    This one is my pet peeve. For some reason, almost all development teams I've had contact with seem to forget that money from someone else's budget is still money. The business we're in (Facebook on my mind...) may suggest that users cannot just get up and leave when they feel like it. And we may have mechanisms to avoid annoying the same users with everything we break (throttled deployment to just a small portion) that help us mitigate the time we waste on using users as our testers. They may forgive us by the time we do that to them again.
  5. A skilled tester can represent a user - and a few dozen other stakeholders
    Many times I hear an implied (or direct) idea that testers are not real users. I'm also very fortunate to hear from my team's developers their surprise at how I can see things that the users will complain about, when they cannot. Skilled testers can represent users. And many other stakeholders too. Numerous times I've addressed things related to the business aspects of the product; user flows that would optimise the value users get; legal aspects; how we support the product in production; concerns of future maintainability - just to mention a few.

    You can also learn some things without involving the users; involve the users when you have done what you rationally can do. Think before you deliver. In collaboration. See point #3.

    People who think testers are not representative of real users and stakeholders may have run into unskilled commodity testers. There are a lot of those around, and they create a bad reputation, most often because their organization's culture drives them into behaving like idiots. 
So in the end, I probably agree with the statement that we often sloppily hide manual regression testing in the term 'exploratory testing'. But not seeing exploratory testing as an approach that includes a mix of regression and new feature testing for any change that we make - with a scope that varies in relation to the change we are implementing and the automation that exists - seems off. The bugs we can find by using the product in a thinking manner in a short timeframe are faster feedback we can react to without the cycle of involving users. Outside our automation, we are still interested in the surprising side effects that we did not intend to design and implement into the system, and there's nothing that beats exploratory testing in finding those issues - and then perhaps automating some more based on what exploration taught us. 


When adding testers, manage the risks

On Cem Kaner's BBST Foundations course, there is a discussion about the ratio of testers vs. developers - and a great bunch of articles pointing out that there is no consistent way of counting who is a tester, what tasks such a person is supposed to do, and that there are many activities that are sometimes considered testing and sometimes not.

Build-and-release responsibilities are probably an easy example of the grey area, where at least I see a lot of differences between organizations. Test automation has testing in its name, but is another grey area. We really don't even want to single out the "tester" and "developer" work, but rather discuss who will eventually do which task.

I'm not sure about my sources in detail (could be BBST materials), but I know I picked up this detail from Cem Kaner years ago, and have applied it in my work since: If you get your first ever tester into your organization, quality might go down. The reason is that people start assuming testing belongs with the tester - now that there is one. I remember Cem pointing out that you might need more than one, when you start building a team. And this advice is from years back.

I was inspired to write about adding testers from this tweet:
I had a completely opposite experience 2.5 years back, when I joined my current organization. When my organization added a tester, quality went up.

The tweet is a very lossy medium and the reasons for quality going down could be manifold. But the first idea that comes to my mind is that no one managed the risk of testing being perceived as tester work.

When I joined my organization, I put significant effort into emphasizing regularly, in communications, that whatever I did as testing wasn't in any way taken away from the developers' workload. I helped my teams' developers find time for their testing (and fixing) that did not exist before - at least that is how they perceived it. I regularly asked the product managers previously doing acceptance testing to keep doing it, giving specific requests to ensure they would not leave it undone just because I had joined. There was one of me, and almost 20 developers for the two products I worked on.

We recruited a remote tester, but there were still 2 of us to 20 developers. If the ratio had been higher, getting my message across about the risk of perceiving testing as something testers do would have been more difficult. 

Lesson learned: if you add testers without managing the risks, quality can easily go down while you think you're investing more into it. People just tend to think that way. I still manage the risk every day, and see occasional symptoms of some developers relying on me catching what they miss - and not testing themselves.

Saturday, February 14, 2015

K-Cards meet AONW

AONW (Agile Open Northwest) is a conference based on open space technology. It wasn’t my first open space, but it was the biggest open space I’ve attended so far. I joined with the idea of giving open spaces another chance, as I’ve never really liked them that much in the past.

On the first day of the conference, I was already getting upset after the first two sessions. I felt I didn't have the right personality for this style of conference. I'm not a quiet person, and most people who know me refuse to see any of my introvert aspects. With new people, however, I might be shy and polite. In a crowd at an open space, where most people feel comfortable shouting their comments with great timing or just a little on top of each other, I was just feeling that while I had something to say, there was no room to contribute.

I could have left the sessions and moved to a smaller session, but the topics I participated in were the topics I was interested in. And yet, I wasn't able to get my ideas in, because of who I am and what I'm comfortable with. 

In a session about women in tech, I felt a particular need to comment on something that someone else had just contributed, something that was a direct continuation of what was just said. I missed the window for saying it, and did not want to play the ping pong of going back to an earlier point when we had actually already switched the discussion thread. 

I zoned out and paid attention to the dynamics in the sessions in the afternoon. There was one that I really enjoyed, with a very small number of participants and, in particular, no dominating discussers (other than me perhaps…). But mostly I was paying attention to the fact that a very small portion of people were discussing, and whenever the discussion was lively, we would ping-pong between different threads so that the discussion was hard to follow. 

I was seriously missing the K-cards we use in context-driven peer conferences. They enable discussing one thing at a time and allow people to get their chance to say things without shouting over an active, dominating discusser. So on day 2, I proposed a session on "Open Salary" that would be facilitated with K-cards. I picked up the topic from a discussion at a coffee table, noting it would be controversial and perhaps even a heated topic. Also, as the volunteer facilitator, I knew I would suffer if I could not personally participate in a topic close to my heart.

I set up the session with post-it notes of four colours, each pile of four with a number on them - a quick-and-dirty version of the actual K-cards. I made a flip chart with the colours and their meanings, and started the session by explaining the roles (facilitator, content owner) and the cards as a way of signalling your need to contribute (new thread, same thread, speak now, rathole). From the cards I created for participants, I could tell there were 10 people (plus myself) in the session to begin with. During the session, one helpful participant introduced 11 more people joining in to the signalling system, getting us up to 21 participants. 



Since I was facilitating with the cards and numbers, I can go back to my notes of how the discussion flowed. During the 50 minutes of actual discussion, we had 9 new threads, where one person started two threads, so there were 8 individual thread starters. In total, 80 turns to get into the discussion were asked for. Out of these, 4 were red cards, where the person in question felt what they had to say was so urgent they could not wait for their turn. Once a red card is used, it is taken away, so the same person can't keep interrupting. 

The longest thread was about the willingness to share one's own salary; it went on for 25 turns until three people flashed their blue cards, indicating the discussion wasn't going forward anymore. In that discussion, we learned for example that the group had a scrum master with a 120k annual salary and a scrum master with a 65k annual salary, and that asking for 90k can result in getting 120k, so it's not just about how well you negotiate. 

Out of the first 10 people, everyone contributed to the discussion. Out of the total of 21 people that were there, 16 contributed to the discussion and 5 did not. 

To show how the turns were divided, I collected the data together into a graph. The most active person asked for 12 turns and on some of the threads had to wait for the people who had contributed less to get their say on the topic.
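For anyone curious how such a graph can be pulled together, here is a minimal sketch of tallying turns per numbered card from a facilitator's notes. The turn log below is made-up illustration data, not the actual notes from the session; only the idea (count the turns each numbered card asked for, then chart them) matches what I did.

```python
from collections import Counter

# Each entry: (card number of the person asking for a turn, card colour shown).
# Illustration data only - the real session log had 80 such entries.
turn_log = [
    (3, "new thread"), (7, "same thread"), (3, "same thread"),
    (12, "speak now"), (7, "same thread"), (5, "new thread"),
]

# Tally how many turns each numbered card asked for.
turns_per_person = Counter(card for card, _ in turn_log)

# Print a rough text "graph" of turns per participant, most active first.
for card, turns in turns_per_person.most_common():
    print(f"participant #{card}: {'#' * turns} ({turns} turns asked)")
```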



There were moments when I wished I was keeping better track of the threads. I was also reminded how important it is to name the thread we are on and label it so that the audience is able to stay on a thread or know that they are starting a new one. 


I hope something of this might stick with some of the participants, and I personally was thinking about organising an open space where every discussion session could have a facilitator helping out the content owner. I know I would enjoy it more if I did not have to fight for my chance to contribute, and I'm pretty sure I'm not the only one who stays quiet and listens just out of unwillingness to fight to get a point in at the right time between the other enthusiastic contributors. 

Tuesday, February 3, 2015

Serendipity in testing

Two years ago on a bus somewhere in Ireland, I had a chat with Rikard Edgren that taught me something relevant I thought I should share today. I was talking about tester luck: not being able to use pretty much any software without seeing it fail in various ways, sometimes even without intention. Rikard pointed out that there is an actual word and concept for it that he discusses in his book The Little Black Book on Test Design: serendipity. I needed to know more.

Rikard pointed out that if we as testers talk about luck regarding something as core to testing as serendipity is, we're not helping non-testers - people who do not have the same experience of serendipity - understand and value what is special about testers. There is a reason why regularly, consistently, my team members ask out loud "How did you find that, really?".

Serendipity is a "lucky accident", but in testing, it entails more than just an accident. Those of us testers experiencing serendipity tend to do something to push our luck. Luck favors the one who intentionally varies their actions, understands how things that seem the same can be different, and relentlessly keeps doing things differently. And the more I test, the more ideas of what could make things different I seem to find.
"The more I practice, the luckier I get" - Arnold Palmer
Here are some of my examples of serendipity in action. Even when I recognised the theory of error after the issue was found, I cannot claim I actively thought of things like that while testing, at least not at the particular moment I ran into the issue. These are just four that I particularly remember; serendipity at play happens all the time.
  • Getting HTTP 500 instead of HTTP 404, resulting in a program error
    This issue is what inspired me to write this post. I was testing a new feature today on a new application we are working on. We had just introduced an authorization feature, meaning that users should not get to see pages in the application they were not authorized for. It had been tested quite thoroughly by the pair of developers implementing the feature, and I was not intending to do a deep test on it, just as we had agreed in the team.

    I created a few users with different levels of rights, and made notes of the pages the higher rights level had and the lower rights level should not have. And while making the notes of positive cases, I decided to try a few addresses without any preparation on my part. I was about to change the end part of the address to point to a non-existing page, and before trying some garbled text, I first tried just removing the final "s" from "users". Unexpectedly, the very first test I did ended up showing me a program error, a case we did not handle for the users. A case that I thought was specifically mentioned in the Jira issue about this change.

    I then tried the garbled text and the real pages, and all of them seemed to work to the extent I was testing them.

    The surprise from the developers was clear: how did I find that? Detailed analysis of the problem shows there are 2 pages in this application that give an HTTP 500 response instead of the HTTP 404 the feature was designed to handle, because controllers with those names exist on a different level, causing the application to crash - a technology-specific problem.

    The reason I tried that was that a small change to the name of the page seemed to make a different kind of sense than a bigger change. And trying a few variations that appear in any way different to me just makes my life much more fun - with or without the problem. (A minimal sketch of this kind of URL-variation check follows after these four stories.)

  • Galumphing around, resulting in a program error
    This issue happened just less than a week ago. I was testing a feature I had tested hundreds of times before, and feeling a little impatient. Impatience changes how I use the product: I started clicking around the user interface, pressing buttons, inserting and removing text - a lot of inert actions that should have had no impact.

    I double-clicked on one of the many radio buttons, and was presented with a program error dialog, much to my surprise. I isolated the steps to reproduce to just a double-click in a specific location, and did some research around the product to see if similar behavior would appear elsewhere, without running into such cases.

    Again, the developers were asking how I found that. And again, I have no better explanation than saying that I vary my actions. It sounds great to be able to refer to galumphing, a technique James Bach brought into the testing vocabulary. I could immediately recognize it in use after I saw the problem, but while I was testing, I was just after variation to push my luck and never feel bored.
     
  • Sampling places and technologies after a configurable product name change, resulting in a broken old feature
    This issue is a little older, but I remember it particularly well because of the team discussions it caused. We had been implementing a feature to make our product name configurable. The developer pointed out there were 57 instances where he had changed it, and that he had tested the changes himself. I was about to look at it with another pair of eyes.

    Knowing the product from the perspective of features, technologies and user scenarios, I implicitly, without consulting anyone, decided to check a few places - there was no way I would repeat going through the 57 instances he had already tested; our product just isn't worth that level of investment for this feature, not with other things in the queue. Testing more of this would mean testing less of other, more important things.

    I opened the first dialog I had in mind where the configurable product name should be visible. The first of my selections was a feature as deep in the application as I could imagine. And to my surprise, the feature did not work at all: the dialog I was trying to open would not open.

    After the surprise of me running into that as the first thing when testing this, the developer came back with the results of his analysis. Out of the 57 places, I had run into the only one that did not work, and the reason was that this one was implemented with another technology. I can claim after the fact that I knew that (I did), but that did not really drive my decision. 

  • Bookmarking a page, resulting in a program error
    This issue is the first one I started with at my new place of work at Granlund. On day 1 of the new job, I was going through the introductory motions just as anyone else would. I was shown the application, and as I was about to head to a meeting that would introduce me to the ways of working, I quickly bookmarked the page I was shown, to get back to it later, knowing I had no way of remembering things without notes as I was getting so much information. As I came back from the meeting, I was about to start testing. I went to my bookmark to log into the application, only to see a program error that blocked me from logging in.

    After analysis, I know that in a huge application with a lot of different areas and pages, I had been lucky enough to bookmark the only page that could not handle a direct link. This idea of a test is on my list of things I do, but I had not planned to try it on day 1 of a new job. But serendipity had been a sure way of making an impression.
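Of these four, the HTTP 500 story is the easiest to turn into something executable. Below is a minimal sketch of the kind of URL-variation probe I did by hand - the base address, the paths and the lack of authentication are all made up for illustration and are not our actual application; the point is only that a small variation on a known address should come back as a 404 (or a redirect), never as a crash.

```python
import requests

BASE = "https://app.example.com"  # hypothetical address, not our real application

# Variations on a known page name: the real page, the name minus its final "s",
# and some garbled text - the kind of small variations I tried by hand.
paths = ["/users", "/user", "/asdfgh"]

for path in paths:
    response = requests.get(BASE + path, allow_redirects=False)
    # Anything in the 5xx range means the application crashed on an address it
    # should have answered with a 404 (or a redirect to a login page).
    if response.status_code >= 500:
        print(f"{path}: HTTP {response.status_code} - program error, worth reporting")
    else:
        print(f"{path}: HTTP {response.status_code}")
```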
My current project reminds me that a lot of testing is serendipity and perseverance. Vary to push your luck, be open to possibilities, explore the limits of done. Keep trying more when you think you've tried all that is relevant - it is never too late.
It's not that I'm so smart, it's just that I stay with problems longer. – Albert Einstein
Sounds right to me.