Tuesday, May 31, 2016

Backlog-filling specialties of testing and UX

There's a major change going on with work as I knew it. Half of my team's developers are moving to other work, and I'm bound to analyze what I feel is the right ratio of testers - or rather, whether it makes sense to double it.

Over the last four years, I've learned that one good exploratory tester can both help developers hit the mark better AND fill the backlogs with work undone (some might call these bugs) to the extent that it makes sense to add power to fixing, not testing. The information testing provides gives us little value if we are unable to react to the feedback. There are some types of information that are useful to know even if no fix is applied. But that is more of a special case.

Since the start of this year, my team has had not only a tester but another person with very similar goals, working on a more limited set of problems: UX (user experience). With an existing product and loads of ideas for improvement, I see the same pattern that emerged when I first joined. One person can easily fill the backlogs with work undone, this time from a UX viewpoint.

So all of this leads me to the idea that all too often, both testing and UX are backlog fillers. We create plans without action. We rely on developers to take the plans, fine-tune them into something that can actually be implemented without losing the value we hope for, and turn them into actions through code.

While it is worthwhile to try to figure things out before we build them, there's still a whole lot of stuff we're figuring out as the product grows. This is true for both the testing and UX perspectives.

Plans without actions make us backlog-filling specialties. I find it fascinating to seek better ways to do this, where the effort won't go into building inventories and prioritizing them into the pipeline, but instead into having the expertise readily available. Unsurprisingly, the best mechanism I've seen for this so far is Mob Programming. No more plans without actions, but plans with actions and a good follow-through.

If only it was easy to get people to do it outside the practice sessions...

Tuesday, May 24, 2016

Ways to get to continuous delivery

I listened in on a talk today about Test Automation and Continuous Delivery. Paraphrasing what I think the speaker said:

  • No one she knows of does daily deliveries without test automation (and stays sane - we did hear of one such example, though, where "expecting normal workdays would just not be appropriate") 
  • We need to stop coming up with excuses for avoiding test automation
At the end of the talk, I offered a counter-experience to understand her ideas a little better. I work with a team that has, for two years now, done daily deliveries with ridiculously little test automation. I feel we are still very much sane, and we deliver quite safely. We are working - slowly - on adding some test automation, but for a long time there was none. It took us a long time to build up skills for the right kind of automation we would find useful.

What I believe I learned came from the interesting discussion we had about our beliefs about what to invest in.

In her case, the problem she was solving was that testers were very frustrated with finding simple bugs in manual testing. That is a problem we sort of shared.

In her case, her team sought solutions in the testing space and went for test automation. She told the audience that their automated tests are valuable and find bugs pretty much on a daily basis.

In my case, my team sought solutions in the development and testing space, jointly. We had a limited amount of things we could do at a time, and conversations with my developers led us to decide on investing primarily in cleaning up the code to make it more readable.

I asked the speaker today how their code was, and she confirmed theirs was messy.

So my train of thought is: you can choose to invest in making things better for your testers. If you feel you can invest over silo limits (not just within the testing silo), these two might be your options. You could get to the same place, or in my experience a better one, by investing in clean code. But most people seem to go for test automation as the primary investment, even with unclean code.

When your code is clean, your devs can have discussions around it more easily. Devs share it more easily and ask for feedback more eagerly. Devs make fewer mistakes as you go about changing it. With fewer mistakes, there are fewer of the simple bugs to be found in manual testing.

And when you top that with smart, adaptive exploratory testing like my team does, it is possible to do continuous delivery without automation. Automation would make us faster, but not necessarily safer. The cleanliness of the code is what seems to have a connection to how we experience safety in customer feedback.

I don't think this is an excuse for us to avoid test automation. The excuse is that there have been a lot of skills to build up, and we chose to build those skills in order: clean code first, unit testing with asserts and approvals and Selenium tests later. We're still taking baby steps on the automation front, but doing great on the clean code front, reaping the results in our ability to deliver without breaking production.



Wednesday, May 18, 2016

Technical Assets and the difference for testing

This Monday, we released something we've been working on for way too long. To get to this point, we've deleted massive amounts of code and refactored (with the definition that it can be refactoring without unit tests, with R# in heavy use).

For a period of time, there was a separate code-base for every configuration of the environment. I think we were up to five by the time of cleanup. It was a tester's nightmare. If you found a problem (and there were plenty, more than anywhere else in our product), the problem was twofold. First of all, the problem was likely to exist elsewhere too, and it was likely the developer would forget one or more of those places when fixing - not to speak of how slow fixing was. Second, whenever something got fixed, something else somewhere broke. It felt like a house of cards.

The difference in how this felt was easy to see in my team, since every other part of our application has a different feel to it. When there's a fix, I can count the side effects on the fingers of one hand. Fixing feels quick. Also, the fix very often gets applied to different places at once. Surely, there are sometimes side effects, but rarely.

The feel of the former versus the latter is like night and day. With code like the first, one of me isn't enough for one developer. For code like the second, one of me is plenty for 5 developers.

I've looked at the ongoing discussion long enough to see where the difference I'm experiencing first hand comes from. It comes from skilled developers who build smart technical assets. They actively build and share components to have things done in one place. But the biggest difference comes from communication: they find the components, because they talk.

Mob Programming has increased the amount of practice level talking, and I expect that the continued mobbing will lead to more technical assets - one line of code to do a thing that takes hundreds when reinvented. And the impact on how we test is profound. It's the way it should have always been, but wasn't, not for all parts.

We all do better when we all do better

To take a step back and defocus from a complex and intensive testing task I need to criticize myself on, I'll blog about something old. Something that has stuck with me all these years, and that has amplified over the decade I've known it.

The Finnish Testing Community Scene, when we started bringing it together 10+ years ago, came together in many ways thanks to one great man taking initiative: Erkki Pöyhönen. Eki is still around for testing, very much so, but nowadays so much hands-in-the-dirt in the projects that I get to see a little less of him. Eki is an extraordinary person, a connector of people, and I've learned a lot from him and with him.

The grand wisdom that made me exclaim how much I appreciate him is that a long time ago, he told me (one of) his philosophies: Promoting others promotes you.

At the time, that was an important lesson for me to learn. As an individual contributor, I was working hard on the idea that I need to make something out of myself, working on my brain, my knowledge, my skills and abilities. And everything I did, Eki would notice, mention and speak about, showing appreciation, promoting. I loved him for what he did, and so did everyone else. He finds the good stuff, everywhere.

We all do better when we all do better. Today is a good day to remember to appreciate all the great stuff that happens, without pointing to all the not-so-great stuff. Eki's advice did not sink in for me in one go, but through a long process of personal experimentation with the idea, seeing the comparative results.

Thank you Eki for making me a little bit better person (for myself). And happy name day.


Tuesday, May 17, 2016

A day in the life of an agile tester

Yesterday, at the end of the working day, our product owner approached our team area, at a point where I was the only one at the office. He goes on to explain a piece of feedback from a user, and we pair to understand the feedback as an example with the product.

We think back to what we had agreed on for the feature, and together we learn that while it works as specified, it does not work as we intend the customer experience to be. The missing feature is kind of relevant, even blocking. And for us, with that joint learning, the feature request becomes a bug.


We don't really care about the label, we just care about the customer. And we need a change.

Appropriately, the morning after, there's a regular meeting for us, the team and the product manager, to improve the world as we know it. The previous evening, I had already alerted my team to this, and we had just the right people around the table for solving it. The developer brings in the deep understanding of the current implementation and we learn this will not be a minor change. However, the developer states "I can have it done by tomorrow".

I smile at the idea that he forgot to ask if I'm available for testing it on this schedule, especially knowing it will be an evening thing even if we can do bits and pieces throughout the day. The final experience, the product that speaks to me, will not be fully available before the changes have been done. But I'm not the gatekeeper here. I know that there are days after tomorrow, but I don't mind him taking a stretch goal. I know our product owner well enough to know he has flexibility when we talk about days, not this month's release vs. next month's release.

After the meeting, the dev messages me, unprompted. He tells me he realized that he might have been optimistic on the schedule because we had pre-agreed collaboration with another department and a weekly meeting to interrupt the day. And that he's sorry for not thinking about my schedule. We agree not to worry about estimates (we are very much into the #NoEstimates idea) but to focus on what helps us forward, without haste. Thinking over deadline.

Throughout the day, the developer mentions things he's changing, kind of as a checklist for me to see what to cover. But for me, that is a checklist of things he already covers. I give him as many ideas as I have, and help him work through the minimum scope and risks.

Late in the afternoon, I know that it will be a long night before it's done. Working on it has revealed a bunch of dependencies and connections. Instead of telling the product manager that the developer was overly optimistic, I mention that I will not be available for quite as late as this would require. And it's better for the two of us to take a look at it without rush. No disagreement, work continues.

At 4:23 pm the developer mentions he just checked in the code to a branch he worked from and asks for help. I test, and I find 7 issues, out of which 3 have been introduced with the change. He fixes all but two. One is related to a complicated set of rules and we agree to take more time pairing on that. The other is an inconsistency that has been there before, that I just hadn't paid attention to. I know that addressing the one left behind is only a matter of days anyway, and we agree it can wait for now to keep our eyes on the customer need.

There's a lot we could improve here. But there's a lot of good here too. And it's just a day in the life of this project. And my hat tip to having great developer colleagues.

Experimenting and the agile world


If James Bach's first slide made me explain that reinventing the tester role needs, in my experience, to be much more than defining the role, this post is different. This post is about the ideas attributed to the AGILE WORLD. Matt Heusser was quick with his response.


I find that tweeting this slide is not an endorsement. Reacting to it is not giving visibility to bad things. These bad things exist, and we need to speak about them, to work our way through them just like we've worked (and are working) our way through the ideas that testers and agile don't belong together.

This slide is awful. Let me start with the most awful thing on it. The attack on Lisa Crispin.

It's time for the 15 years of bullying Lisa Crispin to stop. I would have broken down into pieces if I had been attacked as much as James Bach is attacking Lisa. I wanted so much for James Bach to like me that I just watched this go on for years. I'm ashamed of my behavior.

Even if it was true that Lisa's (and Janet's) book doesn't talk about testing by some weird definition of what qualifies, none of that justifies an attack that goes way beyond content debate and resembles holding a grudge over something I can mostly interpret as anger at her voicing the fact that the world is changing. That interpretation is mine, and probably very incorrect. I don't really care why this is going on, I care that it should stop already.

I know many testing specialists who cite Lisa's book. That claim on the slide "not cited or thought of as a testing book" is ridiculous.

This leads me to the second item I have to write on. Testing is testing is testing. We speak past each other. This has become clearer to me as I started having long, deep discussions with a friend who identifies as a developer and is very much into testing. Whenever he said "testing" and talked about what testing gives you (spec, feedback, regression, granularity), I would make a wry face. When he said testing, he talked about a very specific type of testing, which might as well be the only type of testing he knew (I doubt it, although I recognize that his exploratory testing awareness and ability have greatly increased since our collaboration and he represents a professional tester quite well nowadays).

It helped me realize that testing as he understood it is the majority view. To change the majority view, it isn't helpful for me to say that you use the words wrong and what you actually do is what we call checking. Instead, I would talk about the part that he was missing, calling it exploratory testing. Even if all testing is exploratory. Even if their testing included some exploratory aspects, it was more constrained than what my style would add, as long as it was centered on creating an artifact: automation.

His kind, the test-infected programmers, have done a lot of good for the world of testing. It saddens me that there are still too few with those skills. The devs I work with are amazing in many other ways, but not test-infected. They believe that code being readable and understandable helps them avoid bugs, and that is working well for us while we find ways of building our unit testing skills.

Phrases like "Testing is boring but coding is fun" are statements around the other definition of testing. We agree, I think, that the mundane going through the same moves is boring. And the test automation movement is about finding ways of turning that problem domain into coding and maintainable structures. The tooling has gone forward a lot with agile and developers coming into the problem because it was made "coding".

Devs who are test-infected don't deserve to hear that what they do does not qualify as testing if it does not qualify as all of testing. And it qualifies for a much bigger part than the testers' community, in its defensiveness, seems to be giving it credit for.

(yes, I know there are bad developers out there. I'm just lucky not to work with them. I'm not cleaning up after bad developers as a major part of my testing.)

When we learn more about meanings of words in common use, we add labels to them, we don't redefine others' words. There used to be one kind of guitar. With the invention of the electric guitar, the old guitar became acoustic. That's what I feel should happen to testing. Unit testing is the electric guitar, and exploratory testing is the newer invention for the majority of the world. I don't need to reclaim my words, I need to learn to communicate.

The old-grudge pattern is also very much what I observe in the first claim: "Agile was not created by testers or with testers. It's programmer's utopian vision." Agile, as I experience it, is reinvented daily. It's not created by programmers, but by people. And it's about people. I've been a major part in defining what agile (and its success) looks like at my places of work. And I see people meeting other people in conferences and workplaces refining the ideas and ways we speak about agile. Agile is evolving, and looking at it as it was 15 years ago seems off. After all, it's all about learning - many of us have learned a lot in 15 years.

Agile is about getting people together with a shared responsibility, not for testing but for the product we're creating - and testing as a part of it. Some people are more difficult than others. Some people are always right, they argue. And agile has found a way of dealing with this: experiments. Let's try everyone's way. Instead of debating the merits without hands-on experience, let's experience it together, try to make it work instead of proving it's awful, and then see the results. Failing with an experiment and learning is good. And this gives us a mindset of openness to solutions that are outside what the experts might have thought of. And it makes every agile project different from one another.

I said different things about the AGILE WORLD before I lived through years of day-to-day life in agile projects. Some of them struggle more than others. The ones I've been in have made magnificent end-user-experience improvements on product quality (as value of the product) and quality of life. I love being a testing specialist in agile projects, as it enables me to be good at testing in a productive way (continuous impact on what comes out).

Your mileage may vary, but it does not take away the fact that I'm experiencing something good.

The question should be: if bad agile is common, what is it that those of us who get it to work are doing differently? We don't know - could some of the protectionist energy be used to help understand that better? Because it seems to be working for a lot of us. I think it might be just about kindness, consideration and respect. Bullying others for disagreements of opinion is not accepted.

Need to reinvent testers?

James and Jon Bach are delivering their course on Reinventing Testers as of now, and as I'm not there, I can only rely on the glimpses twitter has to offer with #MakeTestingGreatAgain. 

One of the first things that caught my eye is a slide on Why is there any need to reinvent testers?


Point 1: "Because I am a tester and I need to improve myself"

Ok, so I am a tester. I'm not a tester just by role, I'm a tester by profession. I'm a tester by identity. I've been a tester for 20 years and it's (professionally) all I know how to be. I felt uncomfortable when, last Friday, a customer representative referred to me as being one of "the programmers".

There are two ways for me to work on this.

Option 1. I can decide that what I am is what I am and that is not going to change. I can apply (and have applied) job crafting to reinvent the tester role to be whatever I want and am able to do. So far I've crafted my job as a tester enough to have other testers (and in particular researchers of testing) tell me that I'm not a tester.

Option 2. I can work on changing my identity. I've already been working on confronting my love of the tester identity by representing myself as something else. I've joined hackathons representing myself as a programmer. I've deliberately joined discussions with groups that don't know me, representing myself as a business person (easy to fool people because, as a tester, that is core to what makes me so good), a UX specialist (not that hard either, I've always cared about design and it's an area of testing feedback), a programmer (needs interest in technologies and some knowledge, but surprisingly many programmers know very little as well) and a project manager (like a business person, but simpler as the world of opportunities is narrower). No matter what I represent, I'm still me. And all this experimenting has given me newfound respect for what I am and for the fact that it is useful. I don't have to be a tester to be a tester.

I believe that a deliberate focus on things we label "testing" is what made me the tester I am. Earlier in my career, I could have ended up developing different traits than what I find useful as a tester. I could have chosen not to stare at the screen while testing, listening to my inner voices telling me what I observe and deduce. I could have been tempted to take the easy route and just manage testing, when instead I chose to dig deeper into doing it. I could have fallen in love with code and let it take me as it has taken my colleagues. I saw people like me model themselves after other people than the ones I chose to model, and end up not so great at testing. It takes deliberate practice. And hours are limited, until you look at the hours on a long enough timeframe.

I've started to see that with Option 1, I'm like a fireman at the time when sprinkler systems changed the world. I'm ready to become an arsonist, or to encourage people to become arsonists, just so that I would be needed.

Testing is an important skill, but it's a skill that no longer belongs to just testers. We have found ways of making the need and pacing of testing very different (agile & the business environment change). I find it necessary to challenge the status quo now, but rewriting things back to the "good old days" isn't my choice as of now.

I agree, I need to improve myself. But in what the improvement means lies my disagreement.

Point 2: Problems: Craft, Companies, Programming, Expectance of low quality

All these problems are problems in the industry, with a large enough sample. Within the sample, there are examples of places where these are and have been addressed. Could it be that the ones within the realms of agile have actually found ways of doing things better and reinvented testers in a way that the masses are just not ready to accept?

I wish I had been blogging longer, so I could show you in detail how my perspectives have changed. I've lived through things in my own project at work. I haven't just analyzed it from afar, but taken charge of it as my personal responsibility - with my team and my organization.

In a way, I don't care about the industry. I've seen that within the industry, there are companies that do it better. How about opening up the channels to truly listen to those who work with this, and stop telling them that their experiences must be incorrect because of someone else's experiences?

It was supposed to be context-driven testing - what happened to context when defining and discussing this stuff? We lose context, and muddy the waters by combining the worst of the industry into a motivation for how we, as the tester profession, need to improve.

No thanks.

(Note: I'm not open for a debate on this. Debate, as it happens in the testing world, is a form of bullying and I claim my right not to engage in needing to defend myself. Instead, I'm open to a dialog. If you really care about why I feel the way I do and what experiences make me feel this way, I'd love to work on those. Or if you want to explore why you feel differently and how your experiences differ from mine. I want both of us to respect our varying experiences that define what we see and emphasize.)

Monday, May 16, 2016

Explore a world without roles

I find that there might be a connection between the loud "role of a tester" discussion and messages like this that make us realize we are not as irreplaceable as we'd like to think.
It is very much the same as the discussion I had after my session at AATC with one participant. High-level managers are looking at their organizations' quality-related problems and coming to the conclusion that the highest-priority change is to fix the source of badness, and testers as sin-eaters (a phrase I picked up from Jesse Alford) are part of the problem.

The tester profession, on the other hand, has been trying to figure out value and role, to a point where it feels like rallying a campaign that long ago lost its focus and got lost in talking about the role, not the problem we're trying to solve with a role.

I believe that in the greater scale of things, removing entire testing departments is good. One of the reasons I think this is good is that, looking at the ISTQB number of 500 000 testers, I've already seen my fair share of testers who just provide no value. There's also a fair share that do provide value, but there's more of testing value that developers can deliver than testers seem to give them credit for.

Removing the department of testing in the first phase wakes up the developers. They find ways. They will improve. I find it is often easy to become better than what you were when the separation came with weak testers and weak collaboration.

But also, it gives room for a new kind of testing to emerge. There will be things that the teams feel challenged with. And they will find people with an exploratory testing mindset to fill in some of those gaps. They might want, primarily, that people with the mindset will also be able to code. Some get what they want and end up never using the programming ability directly.

I'm growing tired of the focus on the tester profession as the sub-optimization we're selling. I see the real-life problems with sub-optimizing testing, focusing on testing metrics over great products.

So I just want to explore this further: could there be other questions and approaches that would serve us better than sticking to roles in creating the perfect world of software, where the special skills and abilities could come together?

Sunday, May 15, 2016

Step back from roles, methods and definitions - the answer to how is yes

The role discussion, and in particular the style of rhetoric around it, bothers me. I've resorted to actively walking out of Twitter several times this week, just to keep myself from commenting and ending up in a discussion that I see as not going anywhere. Neither side hears the other, and both use strong expressions to change the other's mind. I try to accept that I really don't need to change the other side. But I do need to, every now and then, express why I choose not to debate. It's not an act of not caring or being afraid, it's a choice of good use of time for things I believe take things forward.

As of now, I'm reading a book called "The Answer to How is Yes" by Peter Block. I'm only getting started, but the messages resonate. It talks about us getting entangled with "what works" over "what matters", and seeking out our answers in the how-space where things should be "not our way, not one way, but the right way" even if there are actually many ways of doing this. Sounds like the core of context-driven.

A quote that led me to start reading the book is this one - one that Woody Zuill uses to explain that he shares his experiences, not a method, when he speaks of Mob Programming.


Some 10 years ago, I remember a discussion with James Bach, face to face, where he told me that the peer workshops focus on experience reports because whenever we would try to talk theory, we would just argue. Our experiences differ, but each experience is true. Each theory, method and definition that tries to generalize the world might not match all of our experiences.

I believe the roles discussion is one where we should step back and talk about experiences, and respect the fact that is already obvious: our experiences on what makes a good tester (role) differ. And each of us can explain things from our experiences, instead of trying to argue for an absolute truth of how in a world where one might not exist. 

Saturday, May 14, 2016

Software maintenance and why is that even needed?

There's a big change coming up with my product, cutting the investment in development down by a third. With that decision, the big discussion that puzzled me is around software maintenance:
What do you mean you need 2 people just to keep it running without adding anything to it?
There's a big group of people who look at software as if it is something to build that, after it was built, can just be used without any maintenance. They feel puzzled by the whole concept of maintenance. What is maintenance anyway? It works now, why would it not work later on? You don't change it, how could it break?

I found myself telling a story yesterday that resonated well.

Imagine you bought a brand new car, straight out of the factory. It's all shiny, and just sits there in the parking lot looking beautiful. You don't need to drive it (you can), and none of the use really does much harm to it that you could see.

It sits there, throughout the year, open to weather conditions and people passing by with their bikes just barely too close, scratching your car. First it looks just dirty. Later, some rust creeps in. And rust is devious, because if you don't take it out, it will spread more rapidly.

Before the rust, you will experience the winter conditions. If you don't shovel out the snow, you won't even get into the car. The car is still there, but not quite as accessible as you would like.

You need to maintain that car, so why would you think you don't need to maintain software? None of the things you invest in stays the same over time. And the weather conditions in the case of software are particularly heavy; letting maintenance accrue ends up as expensive one-time maintenance. Some people would say that at that point you're likely to rather just buy a new car.



Whom do I serve?

Someone I respect said in a private group discussion:
“And we serve the business stakeholders, not the programmers."
It left me thinking about whom I serve. I too serve the business stakeholders. I collect and prioritize information to drive change from the business stakeholders' perspective. I drive the change through programmers, but I don't really serve the programmers. We both serve the business stakeholders. Together. With different skill sets for a more complete service delivery.

There's a thought experiment that a friend walks me through occasionally:
  • What is the value of a tester if you have no programmers? 
  • What is the value of a tester if you have the perfect programmers? 
  • What is the value of a tester if the programmer never reacts to any of the feedback? 
Let's look at this from another angle:
  • What is the value of a programmer who writes programs that no one wants?
  • What is the value of a programmer who writes programs that don't work?
  • What is the value of a programmer who writes programs that cannot be fixed or extended?
The programmer value is less without feedback. It's not one serving the other, it's us being better together in serving the business stakeholders. 

At work we had a team day with business stakeholders today. It was great to hear how they praised the product we've created for them (one we still regularly bash for not being perfect). I felt the praise was equally for all of us. The devs wouldn't be where we are without me. I wouldn't be where we are without them. It's a symbiosis. 


Friday, May 13, 2016

A No Jira Experiment

I'm a tester and I like finding bugs. I like finding things I can praise too, but there is still something almost magical in the power of touching a piece of software, bringing all our illusions of how well it works crumbling down, only to rise again stronger having addressed the problems.

Early 2014, I was trying to figure out how I could find a job in San Diego and one of the activities I did was to update my CV. I completely changed it from a list of jobs I've had to emphasize my achievements, and one achievement in particular left me thinking:
Personally reporting and getting fixed 2261 issues over a period of 2,5 years on Granlund projects.
That's 2,5 bugs a day. Every day of the year. Even weekends. Even holidays. Even days when I'm out of office at conferences. Even days when I do other stuff than hunt for bugs.

I could also talk about the type of the problems, but let me emphasize: I counted bugs that got fixed, not ones I found. I worked like crazy to find better ways of targeting the information I would deliver, at just the right time so that it would get fixed and cause the least amount of damage.

You could look at the numbers and say that our software must be buggy. It was when I joined. It has transformed since, and some of that transformation is already visible in the number being only 2,5 bugs per day.

When I looked at the number, I quickly did the math. 2261 issues written down. That is 47,10 of my full working days (with 10 minutes to write a report) used on just writing one-time documentation. And 10 minutes is little; I've probably used a lot more time on bug report writing.
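In case you want to check the back-of-the-envelope math, here's roughly how it goes as a quick sketch (the 365 calendar days per year and 8-hour working days are my rounding assumptions, not exact figures):

```python
# Back-of-the-envelope math behind the numbers above.
# Assumptions: 365 calendar days per year, 8-hour working days.
issues_fixed = 2261
years = 2.5
minutes_per_report = 10

# Bugs fixed per calendar day, including weekends and holidays.
bugs_per_calendar_day = issues_fixed / (years * 365)
print(f"Bugs fixed per calendar day: {bugs_per_calendar_day:.1f}")  # ~2.5

# Time spent on writing the reports themselves.
reporting_hours = issues_fixed * minutes_per_report / 60
reporting_days = reporting_hours / 8
print(f"Working days spent just writing reports: {reporting_days:.2f}")  # ~47.10
```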

You know, I was trained as a tester. There was an amazing theme from Cem Kaner (my big testing idol), starting from his book Testing Computer Software (still the best testing book out there, Elisabeth's Explore It! comes close), called bug advocacy. It emphasized good reporting. Reports are our signature. So I've learned to write some pretty good bug reports. And often, thinking of a neutral wording, the most representative example of the problem and clear steps to reproduce easy and complicated problems just takes time. This is time I often spent alone. Shared time with the developer started when the report was done.

I decided to try an experiment. I would give up on knowing how amazingly good a tester I am, through numbers. I would stop writing my bug reports in Jira. I would scribble on post-it notes. I would use the time I often use in isolation in collaboration, dragging a developer to look at the bug (and fix it under my eyes, but these bugs were already getting fixed before). I would, when working remotely, favor personal messages or screenshots on the team channel when I saw bugs over writing a task. I would do everything in my power to use the time on bug reporting as time with developers.

You can probably guess what happened: I started having fewer bugs to find. The developers started to be more amazing. I started to feel less alone and different.

If there was a manager looking at how I'm doing, they'd say that she finds, logs and gets fewer bugs fixed now. But the true interpretation is that now I'm not creating bug reports, and the end result is better.

None of us misses the dead documentation that holds us down. That energy, which went into keeping Jira true to facts, can be used elsewhere. Another lesson from Cem Kaner: opportunity cost. It's not just what the accounting cost is, but also what you don't get to do with the time you used here. Let's try to make good choices. 

Boxing the tester

I write this post out of slight frustration, to clear out stuff in my head. The frustration is related to discussions that keep coming up in my tweet stream. There are a few themes there:

  • Are we or are we not preventing bugs? (I don't care, except for the part about early and continuous involvement of perspectives for full picture)
  • Are we or are we not making decisions on releasing as testers? (I am, and it seems to be working well for me. I have friends in testing who are not because they feel risk-averse. It's not a role thing)
  • Are we called quality assurance since we don't assure anything? (I don't care, I prefer being a tester but would hardly want to focus my energy on just a term without relevant practical implications)
I love the products and making them great and valuable. Testing is a means to that end. Testing for me is something I'm really good at (and not so humble about), and I love working with people who are good at the programming bit because together, we're magical for the products. The longer I am in this industry, the more I break out of the boxes of the "tester role". I code. Not test automation, but just regular production code. I work on technical designs and architectures. I make decisions on those, just as much as the other people in my team. 

Today was a great example of me driving through a collaborative decision-making process where we'd hear from every team member before I eventually summed up what we decided. Consensus is something the Swedes do well, and Finns struggle with, but we seem to be a lot better at that culturally than e.g. Americans who seem to assume hierarchy to extents I never experience. (Sorry for the boxes, I realize they are incorrect and stereotyping but I can't resist using them anyway) Most decisions work well with consensus decision making, except for e.g. firing. Release decisions are typical consensus decisions for my agile team. We've been delegated the full decision power. 

So, I'm all around the roles. I identify as tester. Sometimes I identify as a programmer. Sometimes I'm a UX person or a product owner. Sometimes I'm a manager talking to managers outside my team's scope. Sometimes I'm a facilitator and a catalyst for improvement. 

I used to care a lot about the identity of a tester, I wanted to be called a tester. The more of the "defining tester" arguments I see, the less I care to identify with that straitjacket. But I wanted to talk about why I think the boxing is useful.

The software industry doubles every five years. This means that half of us have less than five years of experience. I think this idea comes from Uncle Bob, even if it reached me through other people in the programming community. When we have less experience, we need a clearer box to learn to cover first, before breaking through the box and finding a bigger habitat. 

I've worked with newbie testers who were made developers just like any others, and as newbies, they would look to their colleagues for models of what to do. They did not have senior tester colleagues, and within a year, they became programmers who don't question the customer requirements and who can't identify that the software isn't working or delivering as much value as it could. When they sought training, they took the same kinds of training as the rest of the team. When they looked for idols, they never found the testing idols. They never became testers. 

I started off with a strong tester identity. I learned from Cem Kaner, James Lyndsay, Elisabeth Hendrickson, James Bach, Michael Bolton - people my programmer colleagues have never heard of. It became important to be a tester, because with that label I found hundreds of people to learn from. The tester communities are powerful. The oppressed are coming together and helping each other. Together we're strong. My programmer colleagues have never experienced the level of community I've lived through in the last 10 years. 

The community has negative aspects. I don't like the fact that we bash programmers, but the communities are often places where we let out steam to find constructive solutions to hard problems we're facing at work. Those don't always feel safe for programmers to join, and I'd love that to change. I'm all for diversity, testing over testers. I don't like that we're so strong in defining the right words and thoughts, but I try to dismiss that to work on something more productive - failing regularly. And some general behaviors leave people feeling unsafe, even within the tester role. 

I believe the labels are important when we start. The labels help us find our peers and communities to learn from. They bring us together. But as we grow, we need to actively break out of our boxes. 

But you do realize there is, easily, a full 20 years (and more) of intensive study in just learning to do great testing? It's not a thing you get in a few years. If you are everywhere with your skills, you are nowhere yet. Less than 5 years of experience for half of us - we should start from different corners, and just work together to bring the knowledge to paint a fuller picture as a group than any of us has individually.




Thursday, May 12, 2016

Bags of Tricks

When you give an idea a name, you start a process of figuring out what it includes. In an Agile Coaching Circle in Helsinki, the instructor introduced his bag of tricks: things he often does as an agile coach, materials and activities of all sorts that are sort of his fingerprint - this is what you can expect of me.

On my way home from a meetup in a different city (a 3,5-hour train ride away) today, I started to think about my bag of tricks and what it includes and does not:

What I have in my active bag of tricks - stuff that comes out without effort:

  • Presentation karaoke. I often bring out a "battledeck" and get people doing a friendly version of this. There's no battle, other than within yourself. Random topics collected from the group, 5 random slides from a collection, with control over the timing, and GO. 
  • Software to test. I have different apps I often whip up for a session of testing. And applicable documentation to test some parts of those. This piece seems to be one that is growing most in my tricks. 
  • Group learning activities. Getting people to mob or pair on tasks, or work on ideas to put them together in some sort of synthesis.  Running a retrospective in a few formats. Running Lean Coffee discussions. 
  • Games. I have a few I use on the games front, like playing 20 questions and learning strong-style pairing with a phone exercise.  
  • Stories. There are stories I seem to be telling again and again. There are others that get forgotten. I need to make my stories a better part of my bag of tricks. 
  • Cheat sheets and summary sheets. I often notice myself going for Elisabeth Hendrickson et al's Cheat Sheet or Bach/Bolton's Exploratory Testing Dynamics or Michael Hunter's You Are Not Done Yet -checklist or Cem Kaner's Taxonomy of issues from his Testing Computer Software -book. These are distilled ideas over explanations. 
  • Common testing answers. There are things I say on autopilot, like responses to claims like "Everyone can test" and "No user would do that" and questions like "Why didn't you find that bug?" and "Do testers also need to be programmers?"
  • Recruiting new speakers. This is my go-to topic whenever I'm struggling with social anxiety or feel the need to get to know people. From finding out their topic to finding a place to share the topic in, it's an area of tricks. 

What I don't have in my active bag of tricks - ideas for extending:

  • Videos.
    Well, I have videos. I show one specific video often. But mostly I don't like showing videos. I'm too impatient to watch videos. Videos exist so that you can watch them without company. So I have my hangups on videos and thus I don't carry them around in my bag of tricks.  
  • Jokes.
    I feel I suck at telling jokes. Even worse than telling a joke is the idea of telling the joke again. 
  • Agile games.
    There's a whole bunch of games and exercises I've experienced that are useful ways of showing things. I notice myself often thinking I should activate (facilitate some myself) the knowledge of other people's simulations. 
  • Testing games.
    These are mostly things I've learned from James Bach and Michael Bolton. I find it uncomfortable for me to run these. Like getting people to play the Beer game (I just talk about it existing), or getting people to play the dice game. 
  • Article references.
    I rarely give people articles to read more on the topics. I more often reference a book than an article. 
What kind of things are in your bag of tricks? How could we share more of this stuff? 

Monday, May 9, 2016

Risk-based testing in the agile ways of working

I'm inspired to think about risk based testing. I've been working with a lady from Finland with 15 years of deep experience (she's brilliant) who is about to do her public speaking debut in a few weeks and I can't, for the life of me, understand why she's kept to herself so long. She's about to do a great session on her experiences on risk based testing, growing from the telecom world to the safety-critical devices.

With all the good advice she had to give in delivering a trial version of the talk, something left me thinking. And the something is that it is so clear and evident that we live in different worlds, and I don't miss her world one bit. The difference is agile.

Many of her experiences talked about not having enough time and thus needing to prioritize based on risk. Her later experiences talked about safety-critical work making time for testing all risks classified as serious. And those later experiences resembled my world a lot more.

With agile and continuous delivery, I have all the time I need, with risk-based assessment of where time is useful in testing, to test against whatever information I feel I could or should provide. The natural fall-offs tend to be types of testing that require special skills (performance & security in particular) and there I feel we don't do all we could/should. But if they were just tasks over major learning efforts, they'd be included.

Risk-based testing is no longer a technique for me, it's an overarching principle. Find information that matters - that's risks. Find first information that matters for schedules. Find then information that matters for all kinds of stakeholders, with less schedule impact. Big impact on development first.

It's like playing a game of nightmare headlines. Writing all the bug reports (in my head mostly) that I never want to write in real life.

I struggle much more with my risk blind spots. The need of giving myself chances of learning that all my brilliantly crafted lists of ideas to do and things to test are still incomplete. And that new things that emerge may actually be higher in priorities with the idea of minimizing the late impacts.

Then again, with agile, the impact profile is so different. I remember Vasco Duarte talking about this, so I went and googled what he's repeated for the years I've known him work on agile.
Agile is a game changer. I wonder why we write so little nowadays about risk-based testing - did agile change it so that it needs a relevantly different way to be described? Or is it just that we understand that risk, just like exploratory, is just a word that describes any good and skilled testing? 

Thursday, May 5, 2016

Mobbing and competing solutions

With a group of people working together on Mob Programming, there must be moments where more than one person has ideas of what would be the Right Thing to do.

With Mob Testing, I see these as ideas of where the bugs might be hidden. In a training setting, I often stop people from following the ideas to keep the group together, and just park ideas actively in the mind map for the future - which in training often never comes.

The rule of thumb on mobbing would be to approach competing ideas of solutions with the "do both" approach. At Mob Programming Conference this week, I had just the right opportunity to live by this rule.

In the open spaces, we were trying out mob writing an article. We were trying to describe mobbing for the inexperienced through describing the mobbing we were doing right now, and the line of thought from people who do not mob led me to the idea of needing a metaphor.

My choice of metaphor and how it came about

After my talk at Agile Testing Days Scandinavia, a question that left me thinking (and discussing with Llewellyn Falco) was about the difference between Mob Programming and a Coding Dojo. In both, we have a group. In both, we have a rotation. In both, we work together on a problem. Mobbing grew out of Randori (coding dojo), so is there a difference?

At the conference, my response was that the difference I've seen was the style of navigation. With the rule of "an idea from my head to the computer must go through someone else's hands", the dynamics of the group changes from watching a pair (coding dojo) to group of navigators channeling stuff to the computer (mob programming).

Llewellyn was not completely happy with my answer, as mob programming, as he sees it, isn't really defined by an individual mechanic such as the style of pairing. He introduced metaphors to think about the difference: Is a swimming pool just a bigger bathtub? Clearly not. Is there a difference between a first date and a long-term relationship? Clearly yes. The groups that grow together through mob programming are essentially different from groups that just start the mechanics of mob programming together. The level of trust makes it a whole different ball game.

Navigating the idea to action

Instead of explaining all this to my fellow mob writers in the session, I navigated words onto the text file we were creating. As soon as I was getting out the first part of the idea - that we as a first-time group were more like a first date than an established mob, since we were a group of strangers coming together - the others reacted strongly against it. I was not allowed to finish my sentence (so much for kindness, consideration and respect...)

Since we were writing a metaphor, another navigator offered a competing metaphor. What we were experiencing was more like already being experienced in driving a car, but needing to move into a massive truck when changing from programming to writing. Being a proficient writer, I naturally disagreed with this metaphor.

Both of us raised our voices ever so slightly. Both of us started talking more about the metaphors, willing to clarify them in many more words. No text was being written. So I remembered the rule of thumb: do both.

I had been in mid-sentence with my thought, and it was not going anywhere even if I did not finish. I stepped back and proposed working on the idea that was eccentric to me, just to see how it would work as text.

The one with the idea navigated the words onto paper. As he finished his chapter, I no longer cared about my idea - it wasn't any better in terms of clarifying. His words were there and we moved on to describing the experience of choosing what to write about by just writing and delaying the commitment.

Review for consistency & correctness

With the article done, we took it upon ourselves to mob on reviewing for consistency and correctness. And in this reading, we ended up deleting the other metaphor too. It left bits around it that we refactored into something that made more sense thinking of our audience. Those bits were valuable sentences of ideas, ideas that would not have ended up on paper if we had fought about the ideas instead of implementing them.

What did I learn?

I learned that

  • Mob programming gives me a useful heuristic for stepping down in a creative disagreement: do both
  • Doing 'bad ideas' generates new ideas that wouldn't emerge if we just argued
  • Neither our level of trust as a team nor the difference between programming and writing mattered for the point we wanted to make
  • I dislike long release cycles: we did not publish the article right away, it will come out in InfoQ weeks later. So I end up writing a different view into the same experience well before our shared experience gets out. 



Product Owner Dysfunctions

Back in my very first agile project in a slightly larger company, we were struggling with deciding what goes into the product next. Like in so many places since, the idea of a product owner was central. I remember phrases like "single wringable neck" coming up often and being particularly awful. We were always on the lookout for that one magical person who could make all the hard decisions under conditions of extreme uncertainty, someone who was paid enough to be responsible for her decisions. That never really worked.

I remember one time having this wonderful meeting with the five big bosses of the five different business lines on sorting out what goes into the top of the product backlog. They all had enough needs on their plate to fill the whole development pipeline, and little interest in giving up on their own needs. It was not a collaboration but a negotiation, and it wasn't going so well.

A particularly funny experience was when we tried giving them visual tokens of how much they could contribute to decisions - it was some form of a dot vote. Everyone voted for their own, and everything was equal. But then the game balance was broken by giving one token to a person outside this decision group, and with that one token she could tip the balance to choose whatever she wanted. And all of a sudden, the tone of the discussions changed into trying to find more commonalities in what the needs of each business area were.

This seems to be a recurring problem for me. I hardly ever have a product owner who could actually make the decisions for the different stakeholders in a balanced way - the team ends up helping with that work and I find that it's great, adding value to that discussion. But I'm still on the lookout.

What is the best way you've found, when a single source of truth is not available, to balance the needs of five major groups of stakeholders? My magic lies in making the batch size smaller and giving each a turn. We can do it all, but not all today. If you only have to wait a week to get something of value, it seems much better than waiting six months.

Wednesday, May 4, 2016

Six months into the speaking year 2016, what changed?


When you blog about your goals and ideas in public, it also serves as a foundation for realizing that things did not go quite as planned. I wrote a blog post about my 2016 speaking goals in November, and six months later, it is clear things turned out differently.

Give a high-profile keynote 

With me, high profile means some of the known names among testing conferences that do keynotes. I'd say it is safe to say by this time of year that I will not be keynoting at a high-profile conference this year.

Then again, I can redefine high-profile to mean some of the great opportunities I've had. I was invited to speak at Agile Testing Days Scandinavia, and the audience reactions (questions, discussions, learning - love it!) were so worth it. I did a paired talk with Llewellyn Falco in front of a large audience at Agile Serbia. And it looks like autumn will include an invited talk (even if not a keynote) in a testing conference I adore, and a keynote in a new conference.



Publish 2 talks as videos online that wouldn't happen in conferences 

I've published a couple of short videos on my own YouTube channel, and contributed three webinars in the community. I have a backlog of talks that just need to get out, though I'm not sure where I will do them, and webinars with the community players seem like great opportunities. I've also been toying with the idea of just doing a webinar series of my own, or starting a podcast to feed the need to learn through sharing.

Less talks at conferences, just 3 (scheduled 2 already) 

I promised myself I would go out less. I did, but not at this drastic a level. From 33 sessions in 2015, I'm now committed to only 20 = 16 + 2 + 2 (public + agreed + in discussion). I'm doing slightly better on not paying to speak - only four of those. But that is four more than I told myself to do. I decided to pay for speaking at Devoxx UK (time to try my developer-conference wings), at Agile 2016 (family reasons to be there for a week), at a test conference in Latvia if they'll have me (supporting the community) and at Mob Programming Conference (long story).

Some workshops at conferences, as paid work (at least 1) 

With this goal, I'm there with the wonderful TestBash pre-training. I had so much fun. And I'm learning that breaking into this circuit is a long-term game; many already have all their sessions for next year laid out.

Coaching 2 new speakers - finish one in progress and start one new

I've coached a lot more than 2 new speakers this year - I think I'm at 7 now. The ones in progress stick with me and come back with new ideas to review, and that is wonderful. And new ones emerge both through Speak Easy and directly.

I've also taken on my first mentee, whom I teach testing by pairing with her regularly. She can teach me automation, and I can teach her exploration. It's a win-win. And the world will end up with two testers more awesome than either of us could be individually.

NEW GOALS

I'll just say I'm changing my resolution here. I'll be going to places where there are awesome people to connect with. Speaking is much easier for me than mingling and small talk without having spoken, so I just need to speak to overcome my social awkwardness: the assumption that people might not want to talk to me unless they come to me first.

Speaking is not about status; it's about learning. There's no better way to learn than to share and invite people to help you learn more. Try it, and let me know if I can help. It's always a win-win.

It's like seeing yourself in a mirror!

There was a problem in production. A feature could be misused so that one user could change data belonging to another, and two users were taking daily turns manually correcting things to their own liking, until they reached the point where they both decided to complain that the software was broken because their data only stuck for a day.

Surely, that is not what the users were intended to be doing. They just did not understand that this particular piece of data is shared - how could they, when the application actively tries not to share most of the data and makes no clear distinction that this type of data is different?

I mentioned this when I joined four years ago. I've mentioned it, and negotiated for it to change, regularly ever since. But it took four years before it caused any trouble that reached our ears.

As the trouble emerges, our product owner is quick to admit that things shouldn't be as they are. But since they are, a quick fix could be done: make editing of that data available only in special settings. The team looks at the quick fix and confirms it is indeed quick.

At this point, however, someone decides it is good to include the UX designer in the discussion. She immediately sees that the quick fix isn't really improving usability, and comes up with a slightly more complicated design that would take things forward.

The half-day quick fix turned into a week of work. The fix that was needed immediately was postponed from tomorrow to a week away.

The emergent decision starts to bug me, and I question the bundling, asking for the quick fix to be delivered first and the more complicated design only later. It's like looking into a mirror: I'm faced with harsh, emotional arguments about why it must be bundled - because there is so much other, more important work to do that without bundling it will never get done. The current me looks at the situation and assesses that if that were the case, then it just shouldn't be done.

The situation escalates along with the emotions, and I feel even more like I'm facing a mirror of a younger version of myself. The spirited, committed fight for quality as you believe it should be: users are the key! The need to be the gatekeeper protecting them from the bad, even if it blocks some of the good. The despair that the better future where things get fixed will never come. And I respect seeing the mirror, as it points out how I've changed.

It's not that I have given up on my bright-eyed striving for ever better, but I've grown to accept that delivering something today, in small batches, is the best thing we can do. The other batches will come. There needs to be a balance between improving UX, improving maintainability, and adding new features. When each is done in small batches, we can do all of them without completely stopping any. And we can also completely stop any one of them for a while, to make room for more items of another kind.

The mirror also reminded me that when I joined as the only tester on this team, the bugs I could find generated so much work that the whole team was needed to fix them. Many of the bugs had already been in production, and seemingly the users never cared (or rather, had not cared by that time). It took more patience than I believed I had in me to balance some of the fixing with the features product management wished to see.

It must be just as painful to have ideas on how to fix things in the user interface but not the ability to carry those ideas through yourself, or at full steam, as it is to report issues (UX ones included) as you test but not be able to fix them yourself.

Plans and feedback don't change the product for the better; the fixes and changes do. So yes, there is such a thing as too much feedback: too much, too soon, once you've piled up a backlog.

In four years, the simple functionality bugs have vanished. The UX bugs are being worked through. And some of them, this one very much included, require changes in logic well beyond the user interface to really achieve the experience we're after. In small batches of forward-driven value, flowing continuously, we'll get there. And learn some more while en route.

Tuesday, May 3, 2016

What does 10x look like?

On May 2nd, there was a keynote at the Mob Programming Conference with a message I need to share with all of you. You know the idea of some people being 10x more productive? Here's a way of thinking about it that had not occurred to me.

If we're running two shops, each with an overhead of $95,000, and one makes $100,000 ($5K profit) while the other makes $145,000 ($50K profit), the latter is 10x the former. We do this calculation quite easily and assess the numbers intuitively. No problem there.

But when we start looking at numbers involving interest and compound interest, we get fooled by them. A $100,000 loan at a 10% interest rate costs $839/month if paid off over 50 years, and $2,124/month if paid off over 5 years. Again, 10x (50 years versus 5 years).

The same $100,000 loan at a 100% interest rate costs $8,333.33/month if paid off over 50 years, and $8,402.31/month if paid off over 5 years. Again, there's the 10x difference in the repayment period - but now the monthly payments are nearly identical.
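
For the curious, these figures can be reproduced with the standard annuity payment formula, assuming monthly compounding - an assumption of mine, since the talk did not show the formula. A minimal sketch in Python:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard annuity payment, assuming monthly compounding."""
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

for rate in (0.10, 1.00):
    for years in (50, 5):
        payment = monthly_payment(100_000, rate, years)
        print(f"{rate:.0%} interest, {years} years: ${payment:,.2f}/month")
```

Running this gives roughly $839, $2,125, $8,333 and $8,402 per month - the same figures as above, to rounding.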

The 10x with compound interest - as with learning and improving yourself - does not look like getting 10x more done today. Today might actually look very much the same as if you were not investing in learning at all. Compound interest hits you over time, and that is what makes the 10x difference. And instead of thinking of debt and loans, we could look at learning that makes us gradually better.

The speaker calculated what would happen if we spent an hour learning every day to get a 1% improvement: how long would it take before the investment paid itself back? About 28 days.
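
Out of curiosity, here is one way that back-of-the-envelope calculation might run. The assumptions are mine, not the speaker's: an 8-hour day, one hour of it invested in learning, and output compounding at 1% per day. With those numbers the cumulative output crosses the no-learning baseline in under a month - the same ballpark as the 28 days quoted.

```python
invested_fraction = 1 / 8   # one hour of an 8-hour day goes into learning
daily_gain = 1.01           # output improves 1% per day, compounding

productivity = 1.0          # relative output of a full, uninterrupted day
cum_with_learning = 0.0     # cumulative output while investing in learning
cum_baseline = 0.0          # cumulative output without any learning

day = 0
while True:
    day += 1
    productivity *= daily_gain
    cum_with_learning += productivity * (1 - invested_fraction)
    cum_baseline += 1.0
    if cum_with_learning >= cum_baseline:
        break

print(f"Cumulative output catches up around day {day}")
```

The exact crossover day depends on what you assume about the working day and the size of the daily gain, but the shape of the curve is the point: the early days look like a loss, and the compounding only shows later.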

With these numbers in mind, the quote shared from the AppFolio blog, about the company trying Mob Programming, sounded especially funny and exemplary of his point that we don't recognize 10x when it is right in front of us.
"Unsurprisingly, the team did not achieve 10x productivity. In fact, we found our productivity to be almost the same as it was before…Your mileage may vary, but as far as we’re concerned it’s a resounding no.  
Is the product higher quality? Is there better test coverage? Is the code idiomatic and does it follow best practices? Are the chances of a bug crawling into the product minimized? From our experience this is the most emphatic yes of all the concerns listed above. Not only does having everyone together increase accountability and awareness, but mistakes that may be made by more junior developers are more likely to be caught. Furthermore, when our QA engineer was in the mob, he gained a much better sense of how to go about testing the feature as thoroughly as possible."
That latter paragraph is what 10x productivity looks like in software development. Hitting the mark better. Catching the bugs better. Working together better. We've been notoriously good in this industry at declaring done early and at separating the finalization work from the original work, creating an attribution error.