Thursday, April 19, 2018

Where Are Your Bug Reports?

Yesterday, I put together a group of nine testers to do some mob testing. We had a great time, came away with a shared understanding, amplified each other's weak ideas, and stood together in how we feel about the functionality, the quality, and the value for users.

This morning, I had a project management ping: "There are no bug reports in Jira, did you test yesterday?"

Later today, another manager reminded me over email: "Please, report any identified bugs in Jira or give a list to N.N. It's very hard to react/improve on a 'loads of bugs' statement without concrete examples." They went on to inform me that when the other testers ran a testing dojo, they did it on a Jira ticket and reported everything as subtasks, and hinted I might be worried about duplicates.

I can't help but smile at the ideas this brings out. I'm not worried about duplicates. I'm worried about noise in the numbers. I have three important messages, and writing 300 bugs to make that point is a lot of work and useless noise. This is not a service I provide.

Instead, I offered to work temporarily as a system tester for the system in question, with two conditions:
  1. I will not write a single bug report in Jira, but get the issues fixed in collaboration with developers across the teams. 
  2. Every single project member pair tests with me for an hour with focus on their changes in the system context. 
The jury is still out on my conditions. I could help, but I can't help within a system that creates so much waste. I need a system that improves the impact of the feedback I have to give through deep exploratory testing, focused on value.

I'd rather be anything but a mindless drone logging issues in Jira. How about you?
 

Tuesday, April 17, 2018

Interviews Are a Two-Way Street

We were recruiting, and had a team interview with a candidate. I was otherwise occupied, and felt unsure about hiring someone I had had no contact with, especially since the things I wanted to know about them were left unanswered by my team. We look for people strong in C++, but also Python and scripting, since a lot of a DevOps-type team's work ends up being outside the pure home ground. So I called them, and spent my ten minutes finding out what they want out of professional life, what makes them happy, and making sure they were aware how delighted my often emotion-hiding team would be if they chose to hang out with us and do some great dev work. They signed. And I'm just as excited as everyone else was after having had the chance to meet the candidate.

So often we enter an interview with the idea of seeing if the person is a fit for us. But as soon as we've established that, we should remember that most of the time, the candidates have options. Everyone wants to feel needed and welcome. Letting the feeling show isn't a bad thing.

There's a saying that individuals recruit people like them, and teams recruit people that fill the gaps - diverse candidates. For that to be true, you need to have first learned to appreciate work in a team beyond your own immediate contribution.

All this recruiting stuff made me think back to one recruiting experience I had. I went through many rounds of checking. I had a manager's interview. Then a full day of psychological tests. Then a team interview. And finally, even the company CEO wanted to interview me. I required yet another step - I spent a day training testing for my potential future colleagues, in a mob. Every single step was about whether I was appropriate. Whether I would pass their criteria. They failed mine. They did not make me feel welcome. And the testing we did together showed how much use I would have been (nice bugs in their application, and lots of discipline in exploring) but also what my work would be: teaching and coaching, helping people catch up.

Your candidate chooses you just as much as you choose the candidate. Never forget.

Saturday, April 14, 2018

Second chances

"This does not work", they said. "We used to find these things before making a release", they continued. I see the frustration and understand. I feel the same. We lost an exploratory tester who had spent 13 years with the application, and now that they've been gone for a month, we are reaping the results. Our ways of working are crumbling in ways none of us anticipated.

We lost the tester because for years they got to hear they were doing a bad job. How they were not needed. How they would only become valuable if they learned automation. And they were not interested. Not interested when their managers said it. Not interested when most conferences were full of it. Not interested when articles around the globe spouted that the work they were doing was meaningless.

They found the job meaningful. The team members found the results meaningful. And it was not like the manual exploratory testing they did had stayed the same over the years. As others in the team contributed more automation, their testing became deeper, more insightful, targeted on things where unexpected change was the only constant.

I reviewed their work long before they decided on leaving. I promoted the excellence of their results, and made visible their silent way of delivering the information. And when they decided it was time to let go of the continuous belittling, I was just as frustrated as anyone in the teams that lack of appreciation would lead to this.

Just as they were about to go, we found them a new place. And I have a new tester for my own team. The very same tester who elsewhere in the organization wasn't supported is now my closest colleague. I got a second chance at helping non-testers and non-programmers see their value, for them to feel respected like I do.

And for that I feel grateful. I already knew my manager is a great match for me in my forward-thriving beliefs of building awesome software in collaboration with others, valuing everyone's contributions and expecting daily growth - in diverging directions. My good place - my own team - is again even better with a dedicated manual exploratory tester with decades of deep testing experience.

Wednesday, April 11, 2018

Task assigning does not teach self-organization

I was frustrated as I was ticking away mental check boxes on the testing that needed to be done. It was one of the last tasks of a major effort so many of us had contributed to over the last six months. The testing I was doing wasn't mine to do; I had agreed with our intern that this would be work they'd do. Yet I found myself doing it, after three days of pinging, reminding and explaining what needed doing. The work wasn't the only thing I had expected from them: as I was doing it, I learned they had also skipped my previous instructions of utmost importance.

As I completed the task, I shared the status to our coordination channel. Next up was a discussion I wasn't sure how to have, on them missing the mark of my expectations big time.

My feelings are a thing I can hardly not let show, and I approached the discussion letting my frustration be visible, and using my words to explain that I wanted to understand, and that failing wasn’t something I’d punish on, but something we just had to talk about.

We learned three things together.

For not doing an important thing I had reminded them of multiple times, there was a clear lack of understanding of why I considered it so relevant. Having discussed the big picture of risks, I'm sure that particular thing gets done the next time. The intern expressing frustration at a boring and repetitive task also led us into identifying the root cause, and they volunteered to drive through an organizational fix - while not dismissing my instructions on remedies while the fix was not in place. I was delighted at the show of initiative.

For not doing the testing I ended up doing, I learned they were overwhelmed with the number of requests all around, and the problem was a result of misprioritization. They were spending their time on a pesky test automation script, while the real priority would have been to complete the testing I had just done.

We also reviewed the testing I had done, only to realize they would not have known what to do. The task was one with many layers, dependencies on what happened while testing, requiring end-to-end understanding of the business process and a perspective on the lifecycle. All this was obvious to me, but they had worked on simpler tasks before. We had now identified a type of task that stretched too far.

We ended by celebrating how awesome this story of our mutual learning is, and agreed to handle the intake of complex work differently next time around.

As I mentioned the experience to a colleague, I was told their preferred way of dealing with this is Jira tasks with clear instructions. That's what they had been doing to me, learning that I never obeyed. Others did. The discussion made a difference in belief systems visible: I was building each of my colleagues toward their best future self as a contributor. My colleague was focusing on how to get the work done within existing limitations.

Their style gave results where everyone did a little less than what was asked. My style gave results where we occasionally failed and reflected, but I could always expect a bit more next time around.

Assigning tasks wasn't growing people. Quite the contrary, it created an environment where people consistently underdeliver to a standard, never reaching their potential.

It takes past better experiences or exceptional courage to step into self-organization when you feel the organization around you just wants you to focus on assigned tasks. I'm lucky to have past experiences that allow me to never obey blindly.



Promoting the Air

Last week, I tweeted a remark:

The irony was that after 1.5 years at my current company I did a teaching-women-Java thing, and that was something people were excited about to the extent that they wrote about it (interviewing me) for the company blog. Meanwhile, I do 30 talks on testing/agile a year, most of them international, and none of that has crossed the news bar.


I talked further with many colleagues in testing, and came to the conclusion that we are in an interesting situation: our profession is highly valued within teams, a lot of managers are "helping us grow" by pushing more automation even by force, and anyone outside the teams has increasingly skewed perceptions of what we do, and why.

We are like the air we breathe. Invisible. As soon as it is lost, we notice. And we lost some of our testing very recently, so now we are noticing.

This all leads me to think of being the change I want to see. So instead of dwelling on my frustration at the irony, I took the observation and promoted the other things I do. The amount of empathy and understanding in response to sharing my frustration has been overwhelming. And the constructive actions of wanting to hear more, wanting to share more equally and learn about this stuff have been delightful.

We who see what testing is and appreciate it need to talk about it more. We need to help others see and remember the invisible. Every single tester does their part of the promoting. Some of us get our voices heard just a little further, but our common voice is stronger than any individual's.

Thursday, March 29, 2018

A Developer's Idea of Exploration

"We did a neat thing exploring today", a developer exclaims. I look, interested, wondering what is the source of excitement this time. It's not the first time they've been excited about doing things clearly very close to my heart. But a lot of times we find our ideas of exploring take very different, yet fascinating turns.

"We did this combinations test", they explain. "We took a bunch of values when we did not feel like thinking too much, passed them all in, and created combinations", they continue. "We learned about behaviors we did not think of", they finish. And we agree that is wonderful. Learning while testing, appreciating the new information - absolutely something we share.

There have been little remarks like this coming my way a lot recently, and while I can share the excitement of learning something we did not know, I also find that a developer's way of getting "as close to exploratory testing as I usually do" isn't quite where my exploration is.

There was a session about property-based testing, and generating test cases to run through the same partial oracles. Just doing it wider does reveal things of an unexpected nature, especially when you have a way of identifying some relevant aspect of correctness with a property.
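The generate-and-check idea can be sketched with plain Python; the function under test and the two partial oracles here are stand-ins of my own, not the session's actual code:

```python
import random

def dedupe_keep_order(items):
    # Stand-in function under test: drop duplicates, keep first occurrences.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def check_partial_oracles(items):
    # Partial oracles: we don't know the exact expected output for an
    # arbitrary generated input, but any correct output has no duplicates
    # and contains exactly the same set of values as the input.
    result = dedupe_keep_order(items)
    assert len(result) == len(set(result)), "duplicates survived"
    assert set(result) == set(items), "values were lost or invented"

random.seed(0)  # reproducible runs
for _ in range(1000):
    length = random.randint(0, 20)
    check_partial_oracles([random.randint(-5, 5) for _ in range(length)])
print("1000 generated cases passed the partial oracles")
```

Property-based tools like hypothesis add smarter generation and shrinking of failing cases on top of this same loop.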

There was an exercise of creating combinations for a 3-variable method, finding out the application does not work as specified on its boundaries. Just having more cases easily available and visually verifiable revealed information of an unexpected nature.
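That kind of exercise might look something like the sketch below; the 3-variable method and its 1..100 specification are hypothetical stand-ins, not the exercise's real code:

```python
from itertools import product

def accepts(width, height, depth):
    # Hypothetical method under test: a parcel is accepted when every
    # dimension is within the specified range 1..100, inclusive.
    return all(1 <= v <= 100 for v in (width, height, depth))

# Values on and around each specified boundary.
boundaries = [0, 1, 2, 99, 100, 101]

# Generate every combination and print a visually verifiable table,
# instead of hand-picking a few cases per variable.
for combo in product(boundaries, repeat=3):
    print(combo, "accepted" if accepts(*combo) else "rejected")
```

Scanning the resulting table makes an off-by-one on any boundary stand out, which is where the "does not work as specified on its boundaries" findings tend to come from.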

All three examples I've had recently are ways of programmatically doing more. While they uncover relevant information, there's still more to exploratory testing.

This makes me think back to exploration of someone else's web app we did in a mob yesterday evening. Just some things I remember us learning:
  • For a system aiming to enhance datasets to acceptable, it makes no sense that we must first fill in some info when a condition already exists that prevents the dataset from ever being acceptable (a problem in the order of tasks in the process)
  • For uploading files, we'd like to be able to use explorer over just drag-and-drop. It matters how most others do things. 
  • When a user interface view would include many things to show in tables, having relevant tooltips for some but forgetting placeholders for others less obvious creates confusion. 
  • When you must choose one of two but not both, automatically emptying the other field isn't exactly the best way to present that. 
  • When rejecting an input, logging it might be useful.
  • When failing so that there's a big visible error in the log, it would be very nice if that error was made visible also for the user. 
  • When having a recurring element of guiding users, filling it in three different ways makes little sense.
  • When you can get to a functionality with one click as there is just one option, hiding it in a menu requiring an extra click won't be helpful. 
None of these would have been found by the "let me just play with my unit tests" approach to exploring. Then again, none of what we did would have found things that approach could find.

It's not this or that, but this and that. And it's lovely when developers show ideas of applying the same ideas with the tools at their hands. I hope to get to experience a lot more of it going forward. 

Tuesday, March 27, 2018

The Test Automation Trap

There's a pattern that we keep seeing in "agile" projects again and again.

We work together as a team to implement a feature. We automate tests for that feature as part of its definition of done. As end result, we have some more tests than before, on all layers of tests. We get the tests run blue and we make a release.

We work together to implement a feature. The previously added tests make our test run light up in all the colors of a Christmas tree, and in addition to adding the new tests for new functionality, we clean up the previous tests.

The longer we continue, the worse the Christmas tree lights get. The more time we spend on fixing the past tests, the less time we have for the new tests. And we take shortcuts in fixing our past tests, just removing the ones we deemed so necessary before.

And no one talks about it. It is a ritual that we must go through. Like a rite of passage.

Over time no one cares about how well the automation tests things. All we care for is that it passes for us to get through the gate.

I've seen so many people trapped in the cycle of being too busy to think about *why the tests exist* and *what value they are really giving us*. These people have no time for manual testing, because - very honestly - automation eats up all their time. And they might not even see that the approach is not really working out for them.

The test automation trap creates testing zombies. Ones that make the moves, but have stopped learning about what they're doing.

The best way I know out of the trap is to start caring about testing again. Put testing, not the scripts, at the center. It's time to talk about risk and strategies again. It's time to build up a test automation asset that supports whatever strategies you're going for. Stop going through the motions, and think. Learn. Look at where your time goes. Experiment your way out of the trap of magical moves that feel like a better idea than they are.

Thursday, March 22, 2018

The tester work asymmetries in team

In the organization, I work with a team. I sit in the same room with this team. I use a shared label to identify our togetherness. We go through rituals together: planning, working, delivering, demoing, and improving. There's just enough of us so that we can do relevant things, yet not so many that coordinating our work would be a problem.

The best of these kinds of teams work together over a longer time, and on problems they can feel ownership of. That's where my life gets complicated.

The wonderful little team I work with works in an organization that follows the ideals of an internal open source project. The team has no nice list of microservices they'd be responsible for; anything and everything anyone has ever created in the overall system is up for grabs.

As a tester in a team like this, I find it fascinating to look at how people approach the problem of modeling their responsibilities differently.

One tester seems to model their actions on the team's developers' actions. If a developer goes and changes something, the tester follows and helps test the change. A lot of this activity happens by the developer pulling in someone other than themselves to implement automation.

One tester seems to model their actions on the end to end flows of the system, from the perspective of mention-worthy functionalities being introduced. None of this activity happens by the developer pulling people in, but the tester pushing ideas of seeing value in the system perspective.

One tester seems to model their actions on collecting any work anyone would wish a tester would do. Whatever needs doing and looks like being dropped by others ends up as things they do.

Explaining and understanding where the time goes and what activities belong with "a team" can get very complicated. I guess it also makes sense that it's a high-trust environment, where doing is considered more relevant.

Tuesday, March 20, 2018

Working with all levels of ignorance

There's a view of the world of testing on the loose, that I don't really recognize. It's a view driven by those identifying primarily as developers, and it looks at testing as a programming problem. It suggests that we already know what we know and that testing is about keeping tally of that again and again with changes to the applications we're testing.

It is evident that I come to testing from a different place. I approach it primarily as an exercise of figuring out things we don't even know we don't know, through spending time and thought with the applications we're testing. I expect to find illusions to break, and to show how things really are different from what we imagined they should be - in relevant ways.

I think of it as a quest for four types of information.
1) Known knowns - things we know with certainty
2) Known unknowns - things we know with caution
3) Unknown knowns - things we forget
4) Unknown unknowns - things we ignore

So many times over the years, I've been fooled by the unknown unknowns, my own self-certainty of my analytical skills, and lack of focus on the serendipitous nature of many of the bugs. But even more, I've been around to save the same developers, again and again, from their self-certainty of their analytical skills and complete ignorance of information beyond what they already remember.

The idea of orders of ignorance is powerful. And as a tester I come to testing much more from the ideas of not knowing at all what I don't know, but a keen quest to keep experimenting until I find out.

When I drew the image some years back, I was trying to find imagery related to building houses. We know with certainty things a house needs. A house without a door, window, or roof wouldn't be much of a house. Yet even with things we know for certain, we can end up with different expectations, because what one of us thinks is certain, another takes as a mere suggestion. We also know with caution what a house needs. When we know it needs windows, we might not know the exact shape, number or position of them, but we certainly know we need to figure that out. With a house sufficiently complex, we start forgetting some of its nooks and need to rediscover what has become lost. And there are things we just completely miss out on, that could end up shaking the very foundation of what a house should be like.

Thinking back to a particular example of testing a remotely managed firewall, it is also possible to map the activities I came across. I knew that if I introduce a rule remotely, it is supposed to show up as a rule locally. I knew I did not know if there was a rule name limitation, so testing for it made sense. I knew I had created rules before using the local UI and very short names were allowed, and trying it again reminded me that names as short as a single character worked. Yet when using a single-character name remotely through an API, I witnessed completely unexpected performance issues, resulting in us forcing a 3-character limit for stability reasons. All levels of ignorance were in play.



Sunday, March 18, 2018

The Lure of Specifications


There's a fun little exercise from Emily Bache called Gilded Rose. The exercise is intended as a piece of software to extend, and naturally you'd want to have tests before you go on changing it. Coming to it from a more pure testing / tester perspective, my fascination with the exercise is in how people end up modeling the work.

Gilded Rose makes available a specification and the legacy code to work with. When setting up the exercise, I hand people the spec, and create a combination approval test in the scope of the sample unit tests that is easy to extend with new values.
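As a sketch of what such a combination approval test can look like: the item names come from the kata's spec, but the update function below is a simplified stand-in of my own, since the kata's real code isn't reproduced here.

```python
from itertools import product

def update_quality(name, sell_in, quality):
    # Simplified stand-in for the kata's update rules; only the shape of
    # the harness matters here, not this particular rule set.
    if name == "Sulfuras, Hand of Ragnaros":
        return sell_in, quality  # legendary item never changes
    if name == "Aged Brie":
        return sell_in - 1, min(50, quality + 1)
    return sell_in - 1, max(0, quality - (2 if sell_in <= 0 else 1))

# Interesting values for each parameter; extending the test is just
# adding a value to one of these lists.
names = ["foo", "Aged Brie", "Sulfuras, Hand of Ragnaros"]
sell_ins = [-1, 0, 1, 11]
qualities = [0, 1, 49, 50]

# Render every combination into one text blob. Approving the blob once
# locks in current behavior; any change shows up as a diff to review.
received = "\n".join(
    f"{n}, sellIn={s}, quality={q} -> {update_quality(n, s, q)}"
    for n, s, q in product(names, sell_ins, qualities)
)
print(received)
```

Libraries like ApprovalTests automate storing the approved file and diffing it against the received text; here the blob is simply printed.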



The question goes as so many times before: how would you test this?

Given a specification, most people jump at the specification. As the first values based on the spec get added, I usually introduce the idea of watching code coverage as we are adding tests; some people pick it up, others don't.

This particular exercise lets me observe how people connect three types of coverage when testing: covering the spec, covering the code, and covering the risk.

The better ones have
1) Refused to follow the spec step by step, because someone else must have done that already
2) Thought of ways to test that neither the spec nor the code introduces
3) Not stopped testing at covering the spec when code coverage is still low.

There's something about a specification that drives people's focus, making them less likely to see other things without added effort. Sometimes, it might make sense to step away from the lure of specification's answers, and see if the answers you'd naturally get to make any more sense.


Sunday, March 11, 2018

Building a relationship with developers

At a conference talk, I again haphazardly shared the idea of not writing bug reports. I call it haphazard because my talk was about increasing your impact as a tester, and it was just one of the "try this" solutions I shared. But it is one that rocks people's worlds and beliefs, makes them approach me with disbelief, and even come off as attacking me for having an experience and sharing it.

In all these interactions, I found a lot of value for myself in recognizing how the environments I inhabit and set up things in can be different. While "stop writing bug reports" is the thing I say, what is really behind it is the idea of starting to pay attention to the cost-value structure of your work, with a particular focus on opportunity cost. Each one of us has more power to decide what we do and how we do it than we realize. If we are asked to execute preplanned test cases and a manager asks us which ones we executed at the end of each day, we are more constrained than I believe great testing should ever be. Yet even in that setting, we can choose the level of focus we exert on each of the tests. We can add emphasis on some, quickly browse through others, and add our own ideas in between the lines. If we book a meeting with ourselves for an hour to practice using tools and approaches that don't fit into our normal day, most organizations don't even notice. And instead of asking permission for all this, think of it as a possibility to ask for forgiveness - but only if it is needed.

The environments I inhabit are essentially different. It's been years since anyone told me specifically what I need to do and what my next task is. Even the constraints that appear to be in place may not be real. But what is very real is the sense of responsibility and continuous value delivery. I know what good and better results could look like. And I know I don't know how to get to the best results without experimenting.

So I experiment with stopping bug report writing. I end up working my first year in an organization where, on another business line, a colleague is being scolded for lack of "evidence of value": they don't write enough bug reports, and since they don't automate or review automation, they are not visible in the pull requests either. The number of Jira tickets I raise in the whole year can be counted on the fingers of my two hands. Yet the number of issues I find, address and get fixed is different.

It isn’t easy to say to myself to take the harder route and go talk to the developer who could fix this, and seek actively a way to get a decision on it on the spot - it either matters (and we fix it now) or it doesn’t matter (and we fix it when we realize it matters coming back from the users). Delivering continuously supports - even enables - this way of working because you cannot leave issues of relevance around without them impacting the users, very soon. 

The not-easy route is rewarding, though. In those moments where I used to enjoy my private time writing a bug report I could be proud of later (one that never warranted such care and love for any other reason than being the evidence of "me"), I'm building a relationship with the developer that I need so that my work has real impact. I learn more in that interaction. I have better chances of getting my message across. And more often than not, the bug turns not only into a fix but into a unit test too, in that little collaboration we end up having.

When I need to choose between time spent writing a bug report and time spent communicating the bug in a way that creates a better relationship, I stretch for the latter. Not because it is comfortable (it isn't; sometimes the reactions are downright mean) but because it makes us and our software better.

Friday, March 2, 2018

Results from No Product Owner Experiment

Four months ago my team embarked on an experiment to change the way the team worked in an uncomfortable way. We called it "No Product Owner" experiment because it captures one core aspect of it. It was essentially about empowering a team to be customer obsessed without a decision proxy, in an organization where many people believe in finding one person responsible to be a core practice.

Four months later, the experiment is behind us. We continue working in the product-ownerless mode as the team's de facto way of working. The team is still very much an experiment within the organization, and our ways are not being spread elsewhere as we in the team like to keep our focus on the team, technical excellence and delivery.

Experiment hypothesis

We approached the No Product Owner suggestion as an experiment, as it had many things none of us had experience with. There was still the person in the organization who had been hired to be the team's product owner. The team wasn't all super-experienced mega-seniors but a diverse group.

When thinking of the assumption we would be testing with this, we came to phrase it as: a customer-obsessed team directly in touch with its customers performs better without a proxy.

Better is vague, so we talked about looking at the released output from the team. Not all the tasks we could tinker on given full freedom, but the value delivered for the customer's benefit.

Happened before this experiment...

To understand what happened, there are some things that had happened already before. We did not talk about them as "grand experiments" to be shared anywhere. They were just our way of tweaking our ways of working by trying out what could work - and not all of it did.

We had experimented with backlog visibility by using post-its on a wall in the form of a kanban board, using an all-electronic kanban board, and not using a kanban board at all but a list of things we were working on within the team. The last worked best for us; we did not find much value in the flow, just the visibility (and discussions). We had experimented with the product owner's location in relation to the team, having him in the team room, and later on a different floor. We had learned to do frequent releases, and through learning to do that, stopped estimating and focused on fast delivery of continuous value.

The frequent releasing, in particular, was the reins of the team, keeping us synchronized. Value sitting on a shelf in the codebase, not visible and available to our users, wasn't value but just the potential of it. It transformed the way we designed our features, and helped us learn to split features, always asking if there was something smaller in the same direction we could deliver first.

We also had no scrum master. At all. For years. No team-level facilitator, and our manager is the very hands-off, always-available-when-called type, with about 50 people to manage.

Introducing No Product Owner

I blogged about the first activity of No Product Owner already months ago. We listed together all the expectations we had towards a product owner, and talked about how our expectations would change. We moved the person assigned as Product Owner to a role we framed as Product Management Expert, and agreed his purpose towards the team was very straightforward: requirements trawling. He would sit through meetings, pick up the valuable nuggets, and bring them back to the team for the team to decide what to do with the information.

The team embraced the change, and the level of energy became very high. The discussions were more purposeful. We started talking directly to our sales organization, and to our real customers over email and in various communities. We increased our transparency, but also our responsiveness.

In the first month, there were several occasions where the PME would join team meetings on a cadence and express things in the format "I want you to...", only to find themselves corrected. The team wants. The team decides. The team prioritizes. The power is with the developers.

And our team of developers (including testers who are also developers) did well.

From High Energy to New Impacts

Before starting the experiment, we were preparing a major architectural change effort, and there were certain business-critical promises attached to that change effort. As soon as the experiment started, we sat with sales engineers talking about the problem. An hour later, we had new solutions. A week later, the new solution was delivered. The impossible-without-architecture-change turned possible once we understood (and found motivating) the real needs and the real pain.

Throughout the experiment, I kept a diary of the new impacts. The impacts are visible in a few categories:
  • Taking responsibility for the real customer's experience. We had a fix that was delivered in a complicated way through the organization's various teams, and we did not only do the fix as usual, but followed through to the exact date the solution was available to solve the customer's problem.
  • Fixing customer problems over handing them off through prioritization organizations. We hooked real users with problems directly to the people fixing those problems. Turnaround improved, and we did fixes I can claim we would not have done before. 
  • Delivering customer-oriented documentation when there was a solution but it needed guidance.
  • Coordinating work across the organization at the level of technical details to increase the speed of solutions, removing handoffs. 
  • Coming up with ways of doing requested features that brought down the risk and scope of first deliveries, enabling continuous flow of value. 
  • Coming up with features that were not requested that the team could work on to improve the product.
  • Adding configurable telemetry to understand our product better in a data-driven way
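The telemetry bullet can be made concrete with a minimal sketch. This is hypothetical illustration, not our product's actual implementation; all names here are invented, and the only idea it demonstrates is that telemetry collection is gated by configuration:

```python
import json
import time


class Telemetry:
    """Minimal configurable telemetry: events are recorded only when enabled.

    Hypothetical sketch; a real product would send events to a collector
    service instead of buffering them in memory.
    """

    def __init__(self, enabled=False):
        self.enabled = enabled
        self.events = []

    def record(self, name, **fields):
        # Drop events silently when telemetry is configured off.
        if not self.enabled:
            return
        self.events.append({"event": name, "ts": time.time(), **fields})

    def flush(self):
        # Serialize buffered events and clear the buffer.
        payload = json.dumps(self.events)
        self.events = []
        return payload


telemetry = Telemetry(enabled=True)
telemetry.record("feature_used", feature="export")
print(len(json.loads(telemetry.flush())))  # → 1
```

The point of the configuration flag is that the data-driven understanding can be switched on per deployment, without code changes.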
There were two particular highlight days.

21 days into the experiment, the team received feedback that their latest demo was particularly good and focused on customer value. Confronted with the feedback, the team's take was "that's what we're supposed to do now" - we are customer-focused.

65 days into the experiment, the team realized that the last appearance of the Product Management Expert in planning had been around day 55. There were channels other than the formal structure for keeping a pulse on what might be important.

There was one particular low, or risky, day.

40 days into the experiment the team reallocated 3/4 programmers and 1/3 testers to work on things outside the usual team scope.

Interestingly, the reallocation after 40 days took the already customer-obsessed developers and moved them to work on something where they could still carry the same sense of ownership. The subteam ended up representing the business in the cross-business-line effort without needing a role to match the other business line's product owner. Progress on the tasks, with high motivation and a feeling of empowerment, has also been great: 2.5 months into a 9-month plan there is an idea that we might be done in 4 or 5 instead, while still expanding that effort with necessary improvements rather than merely following the plan.

Team Retrospective

After the 90-day period, we had a team retrospective with the ex-product-owner and talked about what had changed. The first, almost unanimous feeling was that nothing changed. Things flowed just as before.

The details revealed that there might have been change we did not appreciate.
We delivered about twice the amount/size of valuable things compared to the two previous 3-month intervals, all of them assessed afterwards through discussions, not through estimates.
We were more motivated, regardless of the team split, which was temporary (even if for 9 months).
We did things we were not doing before, without having to drop things we were doing before.

I can now believe in magical things happening in a very short timeframe. I couldn't before. Some things never reached us before, filtered away to help us keep focus, and turned into big things that could never be done.

We did not magically have more people available. But the people we had available were more driven, more focused, more collaborative, and believed more in their ability to take things through to customers.

The value of not using time and energy on estimating became evidently clear when the task-forced subteam was inflicted on an environment where estimates were the core. The thinking around opportunity cost - what else could one do with the time used on estimating - became clearer.

Finally, we looked at what the Product Management Expert did. They reported higher job satisfaction and less stress. They reported they focused on strategic thinking and business analysis.

No one remembered any piece of information that the PME's requirement trawling or strategic thinking would have brought in during the three months, so there is value potentially not delivered through to the customer (or work wasted, as it has no impact).

Improving the ways of connecting product management and RnD efforts is a worthwhile area to continue on. There may be a need to rethink what an RnD team is capable of without an allocated, named product owner.

There were also some rumours that I had really assumed the de facto product owner role, but I assure you I haven't. Things flowed just as well while I took my 3-week winter vacation and spent at least another 2 weeks conferencing around the world.

Every single team member acted in the product owner role. Every one. Including the 16-year-old intern.

I couldn't be much more proud of my colleagues. It is a pleasure to change the world in our little way with them. Without a product owner.  

Grow your Wizard before you need them

Making teams awesome is something I care deeply about, so it is no wonder that the discussions I have with people are often about problems around that. Yesterday, again, I suggested pairing/mobbing at work, only to receive cold stares and the unspoken words I heard at the last place I worked: "You are here to ruin the life of an introvert developer". I won't force this on people, but they can't force me not to think about it or care about it.

As I talked about the reactions with Llewellyn Falco, he pointed out a story he has told many times before. And with "just the right slot" in my calendar, I'll go and write about it. He will probably make an awesome video when he gets to it.

Some of us have some sort of history with computer games. Mine is that I was an absolute MUD (multi-user dungeon) addict back in the day, and I still irregularly start up Nethack just for nostalgic reasons. In many of these fantasy games, we fight in teams, and we have characters of different types. If you play something that is strong in the beginning, you survive the early game more easily. Wizards at low levels are particularly weak, and in team settings we often come to places where we need to actively, as a team, grow our wizard. Because when the wizard grows to its high-level potential, leveling up with the others' support, that's an awesome character to have on your team.

A lot of the time we forget that the same rule applies to growing the people in our teams. The tester who does not program, and does not learn to program because you don't pair and mob, could be your wizard. At least the results of being invited into the "inner circle", identifying problems as they are being made, feel magical.

Just as in the role-playing games you need to bring the wizard fully into the battle and let them gain the XP, you need to bring all your team members into the work, and find better ways for them to gain experience and learn.

Pairing and mobbing isn't for you. It is for your team.

Friday, February 23, 2018

Assignments in Intent

We're testing a scenario across two teams where two major areas of features get integrated. In a meeting to discuss testing some of this in an end-to-end manner, where end to end is larger than usual, we agreed on the need to set up a bunch of environments.

"Our team sets up 16 Windows computers in the start state" seemed like an assignment we could agree on.

Two days later, I check on progress. I have personally installed 3 computers on top of the ones we agreed my team would do, and am ready to move on to using the computers as part of the scenario. The response I get is an excited confirmation that the rest of them are available too.

The scenario we go through has a portal view into the installed computers, and checking whether the numbers and details add up, I quickly learn that they don't. The ones I set up are fine. All the others are not. We identify the problems ("I forgot a step in preparing the environment" and "It did not occur to me that I would need to verify on the system level that they are fine") and agree on correcting them.

Two days later, I check again. It has not been corrected. So I ask where we are, only to hear that we are ready, which we are not. Containing the mild steam coming out of my ears, I ask if they checked the list where they could see whether things are fine from a system perspective, and I get explanations ("I don't have access", "I did not know I have access").

Another day passes and I realize there's a holiday season coming up, so I check again. The computers are not fine, but "they should be". I ask for a list of the computers, only to learn there isn't one. And I spend a few days tracking the relationship between the IPs (assigned by DHCP, changing over time), the only info given, and the image names and the actual status of getting things to work.
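The bookkeeping I ended up doing can be sketched as a small script. The data and names below are hypothetical, and the sketch assumes a stable key (here, a MAC address) exists per machine to reconcile changing DHCP leases against image names:

```python
# Hypothetical sketch: reconcile DHCP-assigned IPs (which change over time)
# with stable image names, tracking each machine's last known address.

def reconcile(known, observed_leases):
    """known: image_name -> MAC address; observed_leases: MAC -> current IP.
    Returns (image_name -> current IP, list of names with no current lease)."""
    current = {}
    missing = []
    for image, mac in known.items():
        ip = observed_leases.get(mac)
        if ip is None:
            missing.append(image)
        else:
            current[image] = ip
    return current, missing


# Invented example data: two prepared machines, one of which has
# dropped off the network.
known = {"win-base-01": "aa:bb:cc:00:00:01",
         "win-base-02": "aa:bb:cc:00:00:02"}
leases = {"aa:bb:cc:00:00:01": "10.0.0.17"}

addresses, unreachable = reconcile(known, leases)
print(addresses)     # → {'win-base-01': '10.0.0.17'}
print(unreachable)   # → ['win-base-02']
```

Had a list like `known` existed from the start, matching portal statuses to machines would have been minutes of work instead of days.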

The assignment was made in intent. No clarifying questions were asked. Given solutions, the instructions were dismissed. Learning did not show up in the repeating patterns of the process. And finally, there was no consideration for the handoff that the planned vacation made inevitable.

This is the difference between a trainee and a senior. Seniors see and track things others don't.

Today I'm enjoying the results of this prep, finding some really cool bugs, having guessed some of the places where things would be likely to break. Having these issues now, knowing they will soon vanish and that my mention of them here is all I will have to show, is deeply satisfying.


Wednesday, February 21, 2018

Conferences as a change tool

European Testing Conference 2018 is now behind us, except for the closing activities. And it was the best of the three we’ve done so far. We closed the 2018 edition saying thank you and guiding people forward. Forward in this case was a call to action to look into TestBash Netherlands, which is in just two months in Utrecht. I will personally attend as a speaker, and having been to various TestBashes, I’m excited about the opportunity to share and learn with fellow test enthusiasts. 

This promotion of other conferences is yet another thing where we are different. We don’t promote the other conferences because they ask us to. We don’t promote them because they pay us to. We promote them because we’ve learned something before we started organizing our own conference: we all do better when we all do better. 

In TestBash New York, Helena Jeret-Mae delivered a brilliant talk about career growth, with one powerful and sticky message: in her career, while she stayed in the office and focused on excellence at work, nothing special happened. But when she went to conferences, met people and networked, things started happening. She summed it up as “Nothing happens when nothing happens”. There are side effects to growing yourself at conferences, learning and networking, that create a network effect relevant to advancing your personal career. This resonated. 

At European Testing Conference 2018, there was a group of people in different roles in a company. There was the manager, and there was the person the manager would manage. Telling the person to do something differently had not resulted in a change. Sending the person to a place where people enthusiastically talked about doing the thing differently made the person come to the manager with a great idea: there’s this thing I’m not doing now, and I want to do more of it. Ownership shifted. A change started. The threshold of thinking “all the cool kids are doing this” was exceeded. The power of the crowds made a difference. 

While we would love to see you at European Testing Conference 2019, the software industry is growing at a pace where the need for awesome testing and programming education (tech excellence) is very real. We need to grow as professionals ourselves, but also make sure our colleagues get to grow. We all do better when we all do better. We suggest you find a local meetup, learn and network. Go to any conference; go to great conferences. Go and be inspired. The talks can give you nudges with ideas; the skills you acquire through practice. Sample over time, and always look for new ideas. 

The short list of conferences I make a point of mentioning are ones I recognize for being inclusive and welcoming, and for treating the speakers right (paying their expenses and including new voices amongst seasoned ones). I want to share my love and appreciation for TestBashes (all of them, they are all around), Agile Testing Days (in the USA and Germany), CAST (USA, since its latest editions) and Nordic Testing Days. The last one is a fairly recent addition to my list, now that they’ve grown into a solid success that can treat the speakers right. 

I enjoy most of the conferences I’ve been to, and would recommend you go to any of them. I keep a list of my speaking engagements at http://maaretp.com, and the list of places I’ve experienced is growing. 


What’s the conference you will be at this year? Make sure there is one. Nothing happens when nothing happens. Make things happen for you. 

Introducing intentional vs. accidental bugs

There was an observation I made on the psychology of how people function in a previous job that has kept my mind occupied over time.

When I joined, our software was infested with bugs, but the users of the product were generally satisfied and happy. It appeared as if the existence of bugs wasn’t an issue externally. It was an issue internally: we could not make any sense of schedules for new features, because the developers were continuously interrupted by firefighting for bug fixes. 

Over time, working together, we changed the narrative. The number of bugs went down. The developers could focus better on new features. But the customer experience with our product suffered. Customers were not as happy with us as they had been. And this was puzzling. 

Looking into the dynamic, we grew to understand that there was a product component and a service component. And while the product was great, the service was awesome. And there was less of the service when there was no “natural” flow of customer contacts. If a customer called in to report a problem and we delivered a fix in 30 minutes, they were just happy. Much happier than without their need for our service. 

This past experience came to mind as we were organizing European Testing Conference 2018. Simon Peter Schrijver (@simonsaysnomore) was awesome as a local contact, focusing on a good experience with the venue by making things clear and planned in advance. As a result, things flowed more smoothly than ever before. There were changes as we went on, such as reorganizing rooms when setting up sessions, and those changes required the conference venue to accommodate some unscheduled work. While we felt we could do this ourselves, this venue had a superb standard of service (I highly recommend the Amsterdam Arena conference center) and would not leave us without their support. 

Interestingly, some of us organizers felt less needed and necessary when there was less firefighting, bringing back the memories of the service component from the past. Would there be a way of knowing whether people were happier with our past years’ quick problem resolution (we were on it, so promptly) or this year’s feeling of everything just flowing? Whose perception of quality mattered? Interestingly, in retrospect we identified one problem that we had both in earlier years and this year. In earlier years we fixed it on the fly. This year we did not fix it, even though we should have had equal (if not better) financial resources to act on it. I personally experienced the problem with the microphones, and failed to realize that I had the power to fix it. I can speculate on the feeling of “executing a plan” vs. “emergent excellence”, but I can’t know the real causes of the effects. 

This brings me to the interesting question of introducing intentional vs. accidental bugs. If problems, while they exist, make things better as long as we can react to them quickly, would moving from accidental to intentional be a good move? Here the idea of opportunity cost comes into play: are there other, less risky ways to focus on the pull of service than creating the need for the service with bugs? 

With the software product, we needed to invest more in sales and customer contacts to compensate for the natural pull the bugs created for the customer to be in contact and nurture our mutual relationship. Meeting people on content can happen more at a conference with fewer issues to resolve. Did we take full advantage of the new opportunity? Not this time. But we can learn for the future. 

Friday, February 16, 2018

Things the Frog Did Not Notice Before It Was Late


Looking at my quarter of a century in the software testing industry, I can look back at what I am now and what I've been earlier, and realize I've gone through some major learning experiences. Having gone through those learnings, they all now seem evident and obvious. But I've done myself a favor over the years, clarifying my stances in writing, allowing myself to see how my views change. At first I worried about sharing anything I thought was true, because none of it might be, but writing more helped me deal with the concern and just embrace the positive. 

Many of the foundational changes are things I did not see coming; they sank in slowly over longer periods of time. They are foundational in hindsight, and could easily be things I thought I always knew. Here are four that caused my whole belief system to pivot to find new possibilities. 

1. Test Cases aren't What Good Testing is About

Early in my career, I taught testing at a university. The course had 120 students a year while it was part of a major, and I reached a substantial number of local young minds. Part of the course was a four-phase hands-on lab where the students would write a test plan and test cases, execute the test cases and report their testing, as well as automate a subset of their tests. 

A few years passed, and some of my students taught me my first foundational lesson. We met in a real-world testing project, where I guided them into exploratory testing, cautioning against premature documentation of test cases at the time when we know the least, and against the opportunity cost of the documentation work relative to the time we could spend actually testing the products. My students reminded me of my early teachings with words I've come to cherish: "Great to hear you're doing this in a smart way, not the way you taught us it must be done in the university course". 

I used to believe test cases were what good testing is about. But good testing is about finding relevant information on a limited budget, with consideration for what is best both today and in the future. Test cases have little to do with either. 

2. Continuous Integration and Delivery Wins Over Change Management

I grew up with testing in mostly waterfall projects. By the time we were testing, many moons had passed without us seeing the software, hearing only rumors through requirements documents. We tested in phases, with a huge scope and a fixed schedule. And when the software finally reached us, we needed to be careful with change management. Every fix would take us back to testing again, and we only wanted fixes that were absolutely mandatory. And we did not want them to come to us whenever they were available; after all, we were approving a build, not making sure that the end system was the best possible for the users by enabling change. Because change was a risk, and one that usually materialized as something even worse.

I remember the person who first tried to talk me into the idea that continuous integration was a good thing, and how fiercely I resisted. Later, having lived through continuous integration and delivery, I can't imagine wanting to control change in the way we used to. Small changes with small impacts, and lots of them over time, make life much simpler. 

3. Test Managers Can Make Testing Worse

A few years into testing, someone decided I could lead testing efforts and made me a test manager. I created strategies and plans, discussed with the testers I was working with how we'd follow through on those plans, and coached people into being better testers. I sat through meetings, building a great holistic picture of what was expected of us. And I tested some, usually something less time-critical, because my attention could be pulled elsewhere. Then agile hit us. External managers trying to manage small self-organized teams did not make much sense anymore.

I stepped down from my "career path" and became a tester again. The testers I used to manage became better because they did not leave "my work" for me to do, but found better ways of doing it all. They became better testers when part of their work wasn't expected from a "manager". When I knew things others didn't, I could contribute just as much, if not more, as a colleague. 

4. Test Automation is a Core Part of Testing

I've spent years honing the thinking part of testing. I've learned to work with software and hear what it tells me, combining all sorts of information while using the software I test as my external imagination. The thinking part and the manual execution part supported each other, and automation in testing was something that helped me reach things I wouldn't be able to do manually. But a lot of that automation was throwaway code. I had colleagues with a different focus in testing, creating test automation scripts that could run reliably over time, detecting unwanted changes. And I thought of those as separate things: the artifact creation and the performance.

Then I learned to do the part I used to watch others doing, and it changed the way I looked at it. It made me realize my previous company would have been better off with its three-year investment in me if it had gotten both the great exploratory testing results and a piece of automation documenting, in an executable format, some of my lessons that the team could use to hold their stuff together. 

I realized the only reason for me to hire someone who does not both do exploratory testing and test automation, and intertwine them, is that people have not yet learned the other half. And we have lots of test automation specialists who are bad at testing; we even have lots of test automation specialists who are bad at coding. But they leave behind, in the long term, something that could help when they are not around. Those who don't automate make their impact on quality as we see it NOW.


There's the story of the frog that does not notice it is boiling, moving on to a different purpose as food. The frog story might be a fable without a foundation in empiricism, but as a fable it describes the feeling of how things change. Many of the things that changed my views are like that. I did not notice them while I was in the middle of the process. But where I started and where I ended up are very different states. 

Saturday, February 10, 2018

Test Automation Legacy Code

10 years ago, I left an organization that was top-notch in exploratory testing but had no test automation. With exploratory testing alone, I helped introduce the foundation for what would become an aim for continuous delivery, introducing continuous delivery (with a lot of manual steps) to a beta program. The technology preview concepts and ideas are still easily recognizable a decade later, without memories of the history: it's just the way things are and have been.

While I was away, test automation got a foothold. Looking at it in hindsight, I'm happy I wasn't there to mess it up: great testers favoring the thinking part of testing and speaking up a lot about it are one of the most relevant blockers useful automation has, stopping automation from being born while it's still learning its place and form. Lesson learned: give room for things you don't believe in to grow, and they may grow into things you do believe in.

Now that I'm back, I look at the test automation that was created and feel joy at the accomplishment of its introduction. I did not do it. Or maybe I did, by stepping away and leaving the battle of opinions unbalanced, for the automation side to win. But it is there, it is doing real testing, and while it has many, many problems, it is a cornerstone of the way we build and release products.

In the 10 years, I've changed. I've come to remember that I was 12 when I wrote my first program. I've come to appreciate internal code quality, and to recognize when it's lacking. I've stopped looking only at the testing testers do, and started to look at programming productivity in producing the right quality. I've trained with Llewellyn Falco, a legacy code expert, and re-learned programming: legacy code first, test-driven development second, and always driven by hands-on work over reading about it.

This week brought me a new appreciation of the role of legacy code in what I now do for our test automation system. I'm helping us clean up the mess without removing the value, so that we can add more value. I draw from lessons on legacy code and lessons on (test) product ownership, and intertwine them so that the automation we have better serves a product line.

I look at this as a lifecycle. There's someone to select (or create) the framework. There's someone to use that framework, adding tests to the best of their abilities, doing real useful work. And still, there's the time when the code running the test automation is legacy: still living and breathing, and needing attention so it does not block our future enhancement aspirations.

We're inclined towards a rewrite, while refactoring is a better option. When the existing structure emerges from the mess of duplicated details, changing pieces becomes timely. Mending the systems, not making them.


Friday, February 9, 2018

The War of Ownership


Agency. It's the fancy new word introduced to coin an insight in a war I don't want to be fighting. That it is referred to as a war is the first hint that this has little to do with the collaborative, non-violent software development we aspire to.

This is a war in which, in the kinder, collaborative way of working, testers don't feel safe believing their existence is justified. They're struggling for their life as they know it. I don't feel like joining that war; I want the war to stop. How to stop a war isn't my specialty, but I suspect it has something to do with finding options and making working agreements. And when a party at war isn't willing to make any agreements, the stronger wins. Newsflash: the tester profession isn't winning this war. It is taking steps further into alienating itself from the tables where decisions about the future of software are made.

I'm selecting a few points from twitter, in the words of Michael Bolton, to emphasize my takes.
"Testing doesn't make your code better. Testing doesn't make your code testable either. YOU make your code better, and YOU make it more testable, and those are fine things."
Testing isn't an abstract thing that happens. There's someone who does it. And there are two clear choices of who that someone might be in the jobs we have in the industry. It could be a tester. It could be a programmer.

The programmer is the YOU in the clipping above. The programmer makes the code better, the programmer makes it more testable. And the programmer tests. And in my experience, programmers who don't test rarely create good software.

There's the other option of who that someone might be. The tester. It is in the tester's interest to create a clear line and separation. But the trend is to remove that clear line. Many organizations report great results from blurring the lines. They aren't making everyone the same, but they are stopping the man-made absolutes of lines between who does what. They are saying everyone does what their skills allow. Everyone learns. And everyone learns also about testing and about ways to build software that makes users awesome.
"...make explicit a central theme of our Rapid Software Testing classes and consulting work: agency. We want to help empower people; shine light on what they do; help to liberate them."
What I read in this statement is that people = testers. Testers that fit the Rapid Software Testing methodology's requirements. I've been told in the past that I'm not a tester (as per the RST terms, at least). Yet that is exactly the position I hold, and have held, working hands-on with products for the last 25 years.

Empowering people would include allowing them to see the world their way, and mediating, rather than creating a clear distinction between roles that are job functions and don't need such a distinction in a world of collaboration. But that is not the world RST seems to serve. It serves a world that I don't see as a practitioner in the companies I work with.

I recognize I'm selective. I choose product companies. I choose agile methodologies. I choose ones that believe in empowering and listening to all their experts.
"I don't think there's enough salt"
I can notice a lack of salt, and add it. I don't remove myself as an actor when the salt needs adding, if (and when) I know the appropriate amount. Being hired as a tester doesn't require me to remove myself as an actor in the fixing.

When we talk of concerns about limited time and the choices we make in splitting roles to ensure different concerns get covered, we are using concepts from the time before continuous delivery. The world has changed. Quoting Neuromancer's William Gibson from memory: "The future is already here. It's just not evenly distributed." We don't need, and can't have, one true way anymore.

Software development is a process of transforming ideas into code. Which ideas are labeled what isn't as relevant as we think, for reasons other than keeping the profession we love. What could be the ways to add meaning to this conversation that is stuck on violence and war? Isn't there a more constructive way to build a profession, one that draws the lines around testing for the purpose of understanding the tester?


Note added later: I did not need to read JB's article to pick up the words from its title. This is not a response to his article. It is a response to the tone MB runs on twitter. 

Thursday, February 8, 2018

It's just semantics

I work in product development, building and testing a product. The product is a Software-as-a-Service type of product, extending beyond the idea of renting an app from the cloud. Some parts of the product change as much as 20 times a day, introducing new functionality for the service the product provides. When I test, I don't test only the software components, but the whole customer journey and experience of dealing with our product. And with some millions of customers and a long-term commitment to them, striving for better is a fun area to work in. There are no projects. There's the product that lives on. 

So I wrote a piece of my mind about test automation as a product. It too has users, a long-term commitment to them, and is intertwined in appropriate ways with the way we develop. And an ex-colleague decided to comment on twitter:


My first reaction is to say "it's just semantics" - "wordplay". Semantics is the meaning of words, and surely the meaning of words matters? In this case, I don't care about the difference between "product" and "ecosystem". I don't care for the focus on a single word when I've just used many to explain a lot more than that one word. 

To say "it's just semantics" is to say that in this conversation, I'm done. The way you approached the discussion with me just turned it sour, and I'm not committed to continuing. You're derailing me. 

I read a wonderful book called Crucial Conversations that talks about these dynamics in conversations that matter. And conversations about the nature of testing matter a lot to us testers. The book introduces two ways of closing the flow of meaning into the shared pool: violence and silence. Correcting words is a form of violence. My default reaction is silence, keeping the violence option of "it's just semantics" hidden in the back of my mind. 

As we want to add meaning to the pool when discussing, closing down communication isn't a good thing. We can choose to stop, ponder our reactions, and work back towards a place of trust. We can learn more, and add more meaning to the pool, if we just keep at it.

I know Valera as an ex-colleague I have the utmost respect for, and explaining to myself other possible meanings of his corrective statement isn't hard. He means well, just playing on my triggers. I've needed the same reminder about good intentions a lot with men who explain things to me, without me knowing them or them knowing me. 




Counting test cases

Confession: I count test cases. Before you get all riled up, read further. I count test cases for the purpose of understanding how much of something there is. A typical example of my counting is "30 test cases in our test automation" to understand how many conceptual program pieces there are, or "100 lines of functionality added, yet the number of unit tests stays the same". Counting things is useful, but it is not all there is.

On the other hand, it's been at least 8 years since I last counted test cases in the sense of understanding how many there are in a manual test set, how many of them have passed, how many have failed, and how many are yet to be discovered for our list through exploring. To be more precise, it's been 8 years since anyone managed to coerce me to write down a test case, or to guide anyone close to me to write them down. Instead, I write test cases into automation and free up the majority of my time for freeform exploratory testing. 

It is also 6 years since I last did session-based test management, counting sessions or time in functional areas as a measure of progress. And even then, I did it for two weeks to prove a point: I was worth trusting to do good testing without paying extra in time to impose a visibility framework of this sort.

These became irrelevant to me as I helped my teams move to continuous delivery. When we manage scope in hours or days instead of weeks or months, the numbers no longer matter. The quality of the testing we do matters. And we learn about that as we deliver continuously, carefully tuning so that our customers could forget we ever updated their software. 

I started this post with the idea of examining my views on counting test cases: if I was asked to count them again, with all the experience I have, would I? When would I? And is there anything I would advise those who still do?

Finding the Least Amount of Meaning

A core principle in testing is one coined by Dijkstra quite a while ago: testing can show the presence of bugs, but never prove their absence. So even if a million test cases passed, the tests that are worthwhile are the ones failing.

Twitter brings me a haha moment:
The image with its texts captures the least amount of meaning in counting test cases: counting the ones that passed. Counting the ones that did not find bugs. Counting only the ones that pass. Forgetting that each change made for the bugs found invalidates the tests already passed, introducing a new test target.

Adding More Meaning

Thinking back 8 years to the time I last counted test cases, I remember a futile battle turning into a productive negotiation. I started off with the premise that the way things had always been done - counting passed and failed tests - was a way to take us to bad testing and a bad relationship with management. I was faced with the fact that in a 30-day acceptance testing project after a multi-year delivery project, no one was comfortable without a way to see how testing was progressing. I couldn't go full on session-based test management: replacing what was in place would have been a poor choice, given the amount of work needed to ramp up the skills of business specialists in the methodology.

I approached the problem at hand with experimentation. Experiments are a way of asking to try something different, just this once, without commitment to doing it again, because it may go bad too. We started off where the organization was before me: writing test cases in advance, and following pass/fail numbers throughout the 30 days.

In the first 30-day acceptance testing, I started stretching what I perceived as the biggest risk of using test cases as a measure in the traditional way: the quality of the testing that gets performed. With pre-designed test cases, you create the ideas of what to test when you know the least. You have no software in your hands, just the promiseware of requirements. The test cases were created by looking at an old version of the product, imagining how the promises change it, and writing scenarios that walk us through to see the changes in action.

With my lead, we introduced two kinds of test cases. The first batch was just like it had always been: details of where to go, what to look for. The second batch was different. We used the HP toolset to create a template test case, an idea of reusable steps for test cases. The template test case steps were a high-level outline of the process the system was supporting us through, with no details. The actual test cases were test data: people whose data we could use to walk through the process in different ways. We split the time available so that we first tested with the traditional type of tests for half the time, and the other half was left for what was essentially exploratory testing.
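A minimal sketch of what that second batch amounts to, with invented names rather than the HP toolset's actual features: one shared template of high-level steps, and each "test case" being only the data of a person walked through them.

```python
# Hypothetical sketch of the second batch of test cases: a shared
# template of high-level process steps, where each "test case" is only
# the data walked through those steps. All names are invented for
# illustration, not taken from the HP toolset's API.

TEMPLATE_STEPS = [
    "Start the process for the person",
    "Enter the person's details",
    "Walk the process through to a decision",
    "Check the outcome against the promises",
]

def expand(template_steps, data_rows):
    """Pair each data row with the shared steps into a runnable test."""
    return [{"data": row, "steps": template_steps} for row in data_rows]

# Two people's data become two different walks through one template.
tests = expand(TEMPLATE_STEPS, ["person with a simple history",
                                "person with data in a legacy system"])
print(len(tests))  # 2
```

The details of each step are left for the tester to fill in while testing, which is what made that half of the time essentially exploratory.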

All the bugs we found - and we did find quite a few - were found with the latter type of tests. We learned the mix was really good for us at that point in time. Jumping directly to freedom would have made people nervous. The mix of the old and the new allowed us to do great, stretching people not too far from their current skills and comfort zones. We reported tests planned, passed, failed, and started-yet-not-finished across both types of test cases.

In the second 30-day acceptance testing I led, for a different product, we stretched further into exploratory testing. The system we were testing had a complex processing logic, with one step reaching out to a third-party system that included manual processing. We again created test data as test cases and template test cases as reusable steps, and step 7 in the 12-step process was the information the third-party system needed to pass us. The group doing the testing was seasoned in the business process and had never used test cases before, and this was a perfect fit for them.

The results of what testing found before going live were equally great. The test numbers showed us that a big portion of tests were in the started-yet-not-finished state, and helped us encourage the third party in tracking whether our requests for info arrived on both ends.

The third 30-day acceptance testing I led experimented on the secondary risk of using test cases as a measure of progress: conveying the nature of testing as an activity. In the first two efforts, I was aware of the illusion that tests marked passed or failed were creating. As we found a problem, a new version of the system was introduced. When we found a critical cross-system change-introducing bug at the point where 80% of tests were passed, passing the remaining 20% wasn't really enough. The idea of the metric was not only founded on guidance that lowered the quality of the testing that could happen, but it also encouraged lying about coverage by assuming there was no change.

We still used test case counts, but we changed our graphs and communication to the metaphor of a progress bar. We all know how progress bars are: the time waited for something to update and the number shown on the screen often have some connection, but it is not predictable or reliable. It's something that just says 'hold on, wait, be patient - working on something'. With the progress bar, we introduced a 30% "invisible tests" number, showing the allocation we expected for repeating tests or introducing tests while testing. By the time we were at the old 100% of tests passed, we really needed the extra 30% to run tests again for change, and we avoided the old stupid way of non-testing managers deciding that we were done when the things planned were done once.
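The arithmetic behind that bar can be sketched in a few lines. This is my own illustration of the idea, not the project's actual tooling: pad the planned count with the 30% invisible-tests allocation, so the bar stays honest about work remaining.

```python
# Hypothetical sketch of the progress bar idea: pad the planned test
# count with a 30% "invisible tests" allocation for re-runs and tests
# discovered while testing, so "all planned tests passed" no longer
# reads as "testing is done".

def progress(passed: int, planned: int, invisible_share: float = 0.3) -> float:
    """Share of the total expected testing effort completed so far."""
    effective_total = planned * (1 + invisible_share)
    return passed / effective_total

print(round(progress(80, 100), 2))   # 0.62, not the 0.80 a naive bar shows
print(round(progress(100, 100), 2))  # 0.77: planned work done, testing isn't
```

Only when the re-runs forced by change are also done does the bar reach 100%.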

Why Would a Project Need Test Case Counts?

I'm not for test case counts. However, when I have to deal with them, I've learned the core of playing along with them while still doing a good job testing:
  • Free the "test case". It's just a placeholder for things to do. It could be an exploratory testing charter. They don't need to be the same size. Trying to make them the same size is just foolish. 
  • Communicate a 'best before' idea for results. A test passed today can be not-executed tomorrow. And how quickly the 'best before' date hits you depends a lot on the organization. 
Projects need test case counts if they have no other measure of progress and are not ready to place trust in getting a spoken, reliable measure of progress without a forced test-case-counting methodology. 
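The two bullets above could be sketched as a tiny data structure. This is my own illustration with invented names: any placeholder of testing work, whose pass carries a best-before date.

```python
# Hypothetical sketch of the two ideas above: a "test case" as any
# placeholder of testing work (here an exploratory charter), whose
# pass expires after an organization-specific shelf life.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Placeholder:
    title: str                                  # e.g. a charter
    passed_on: Optional[date] = None
    shelf_life: timedelta = timedelta(days=7)   # org-specific guess

    def status(self, today: date) -> str:
        if self.passed_on is None or today - self.passed_on > self.shelf_life:
            return "not executed"               # a pass expires with change
        return "passed"

charter = Placeholder("Explore install flow with low disk space",
                      passed_on=date(2018, 4, 1))
print(charter.status(date(2018, 4, 3)))   # passed
print(charter.status(date(2018, 4, 20)))  # not executed
```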

When I started looking at testing as a time investment and reporting against time, things got more straightforward for me. Given a week, I can always say that with 4 days used, I have only one left. While exploring, I can explain what I've discovered in that time, and what I would use the next week on. I can do that, teams of exploratory testers can do that, but not all business specialists temporarily assigned as testers can do that. 

I know counting test cases is meaningless. I know the same test case done early on can take more time because I can't stop myself from exploring around whatever I was given. I know the same test case done again later can find a problem that was there in the first place, but I was just not in a state with my learning that enabled me to see it. Constraining on test cases when the process is about learning makes absolutely no sense.

But I accept that sometimes I have to do things that make little sense to me, because they help others. I also know that I can experiment and offer alternatives that slowly take people towards where I am in understanding the dynamics around testing. Sometimes, asking people to trust me on my perception of status is enough. I've learned to be away enough to build ways of working that don't crumble in my absence. 

A great option to take people towards is more frequent deliveries. When meeting an organization that counts test cases, that is now the default change I would go about introducing. 



Wednesday, February 7, 2018

Driving test automation forward as a product

I'm in the middle of a very complicated relationship, best defined by love-hate. On some aspects of it, I just LOVE what we've done. Yet on other aspects, I HATE where we are. It feels both a little schizophrenic and balanced. And I'm talking about the test automation I work with.

I work with it by being on the sidelines. I know I can step in whenever I feel like it, but no one requires me to. I can look at it both as an insider and an outsider. My place and position is unique. I find that I see things others don't pay attention to, and my attention brings out things others wouldn't otherwise be paying attention to. And I share this position with you, my dear reader, because there's something you could consider here:

  • if you are deep into automation, what a step back can give you as perspective
  • if you are not deep into automation, what you can make sense of just by seeing concepts and reading code "as if it was English"
I'm working out my relationship with test automation because I'm no longer ok with test automation doing a bad job at testing, or with myself being a blocker for others by focusing on what it cannot do over what it can do. 

There are things that I love, where other people's appreciation helps me appreciate them even more. 
  • Our ability to run automation that kicks off 14 000 clean OS instances up and down a day is quite an achievement, and getting from "I want a clean OS to install on" to "I can start installing" is a matter of a few seconds. 
  • When a new person joins and isn't left to discover the environment on their own, it takes a day to get started. Comparing this to a new person discovering it on their own - weeks to get started, basic proficiency closer to 6 months - I'm an even keener fan of pairing new hires on their first tasks. 
  • It runs and it is kept running. It enables releasing in a way products of this complexity could not be released without it.

There are things that I hate, that others seem to hate much less.
  • It guides new hires to create a corner of their own over sharing common assets. 
  • It has tons of decisions embedded over time that allow others to be judgmental about later hires "not doing things right". 
  • Reuse of things has a manual coding element, taking days of coding just to introduce a concept like "same tests in another environment". And people would rather spend the days on the manual task than create an abstraction. 
  • People think of it as "testing a lot" because it runs often, even if for a very limited set of things to test. It distorts *managers'* concepts of how well we've tested, when the same thing 1000 times is not 1000 times more testing for real. 
So when I said I will reframe myself as an architect, I find I reframe myself first as a test automation architect. I choose to work on things that drive the overall structures for the better. And just expressing things I would like to see us work on brings me to an interesting place of shining a light on things that have been the way they are. 

Since I still don't end up dwelling in the code and implementation details all my days, I see concepts. I see that there are tests that are small (that I want more of) and tests that are large (that I want less of) - and the structure does not help me see them. I see tests, test-specific methods, and common methods - again, the structure does not help me see them. I see products, applications, and components - again, the structure does not help me see them. I see similar use of resources, like having malware samples, temporary data, and persistent data, and I see that the use of those isn't consistent.
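One way I could imagine making those concepts visible (a sketch with invented names, not our actual code base) is tagging tests with the dimensions the structure hides, so tooling can report on them:

```python
# Hypothetical sketch: tag each test with the dimensions the directory
# structure hides (size, product area), and let tooling count them.
# All names here are invented for illustration.

REGISTRY = []

def concept(size, area):
    """Decorator recording a test's size and product area."""
    def wrap(fn):
        REGISTRY.append({"name": fn.__name__, "size": size, "area": area})
        return fn
    return wrap

@concept(size="small", area="installer")
def test_clean_install():
    ...

@concept(size="large", area="installer")
def test_upgrade_from_previous_release():
    ...

def count_by(key):
    """Tally registered tests along one tagged dimension."""
    counts = {}
    for entry in REGISTRY:
        counts[entry[key]] = counts.get(entry[key], 0) + 1
    return counts

print(count_by("size"))  # {'small': 1, 'large': 1}
```

With tags like these, "how many large tests do we have, and where" becomes a query rather than an archaeology exercise.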

I'm in a place where I have the vision of where we might head, for good or better, with limited ability to implement it all by myself. I might be paralyzed by my abilities alone, but others with different abilities may be paralyzed by not seeing the things I see, or not requiring the things I require. In the last three years, I've acquired a superpower that allows me to still do much about this: pairing and mobbing. That superpower, in addition to making it possible to turn my great ideas into code, gives us all a chance of learning together. And I'm looking forward to it.

Test automation is a product that tests our other products. Caring for the overall quality of it is just as necessary as caring for the details of each test. 

Tuesday, February 6, 2018

Security, Testing and My Place in All of This

Where I work, we have this constant struggle between us (the good guys - cyber security specialists) and them (the bad guys - in their various forms). The struggle is kind of fascinating to look at from a tester's perspective.

We know software always has bugs. The good ones of us are good at catching them, and at catching some of the relevant ones before we ever deliver our software out. And we catch some of them after release, listening carefully to signals and confirming suspicions. As a tester, I've come to the ultimate conclusion that what matters is our ability to change. It matters because we will miss something, regardless of our best (and improving) efforts. But it matters more because we have an adversary that plays the game from a different angle.

The people who create malware are software developers, just like us in many ways. I find it fascinating to wonder whether they have the same needs for investing in testing. Any software introducing the right kind of bugs can give an attacker a route into a system they shouldn't be in. Does it matter if the software using an exploit is fine-tuned and tested?

I had a chance to talk with an old tester colleague of mine who now uses his analytical skills in working with security after something bad happens. Something bad these days could be that a company has ended up with ransomware: lots of critical data inaccessible because it has been encrypted. You could try restoring from backups, but what if your incremental backups don't add up to a full restore? Apparently you can also use your testing skills, testing the malware and finding bugs in it that let you open the encrypted files again. The joy on my colleague's face on finding the bug and using it to open up all the files was the same joy we feel when we catch relevant bugs in our software. The skills used are the same or similar; the target of testing is just completely different.

Working in a security company makes me wonder about the role of testing. A lot of the bugs I routinely handle have to do with lost time, lost services, and the inability to do what needs doing. But some bugs, the security ones, have to do with lost access and lost data.

We're starting to see the value of security as data becomes crucial. I wonder if we will ever see the value of people's time, or if other solutions to freeing up time will win out over delivering well-functioning (tested and fixed) software.

We live in interesting times. And today I stop to appreciate that what I do for *this* software, I could do for any software. 

Friday, February 2, 2018

Reading with rose-colored glasses

As a tester, I specialize in feedback. I both find things that no one else was bringing to the table, and amplify things someone else did so that the feedback gets the attention it needs. One of my favorite sayings is from the book Lessons Learned in Software Testing by Kaner, Bach, and Pettichord.


I got to think about this today, as I had a pair of people to give a piece of positive feedback to. 

I approached the first, using the phrase "waking up a security bear" to emphasize that something they did resulted in the positive outcome of identifying and addressing a vulnerability. The positive feedback was taken at face value. 

I approached the second, explaining a little more of the context of why this was important. And while I thought I was still trying to say "well done", I got into a spiral discussion of what was wrong with what they did. Reflecting on the interaction, there was *one thing wrong* - the immediate response that the bug had been previously reported and dismissed then. 

The first one approached the feedback as something positive. The second approached it as something negative. I ended up with two completely different interactions for the same message: job well done, I would like to turn up the good, and this was good. 

The whole experience took me back to a one-liner from my boss: "I want to talk with you". My immediate response was "bad or good?". I chose to wear my rose-colored glasses and assume positive intent. 

Putting these two things together, I realize that wearing the glasses of good intent is the single most relevant thing I have done to feel happier and more successful as a tester. The world is filled with good, and even the (negative) feedback we are bearers of is positive from a constructive angle.