- Team size. How many people were delivering the work I was analyzing.
- Monthly Active Devices (MAD). The first number I was curious about was how many customers the team was serving with the same people. Being a DevOps team meant that the team did development and deployment of new changes, but also provided support for an ever-growing customer base we calculated in impressively large numbers. Telemetry was an invaluable source for this information. It was not a measure of money coming in. It was people using the product successfully.
- Work done, represented in Jira tickets. I was trying hard to use Jira only as an inbox of work coming from elsewhere outside the immediate team, and for the most part I succeeded with that and ruined my chances of showing all work in Jira ticket numbers (I consider this a success!). About a third of the visible ticket work done was maintenance, responding to real customer issues and queries. Two thirds were internally sourced.
- Work coordinated, represented in Jira tickets. Other teams were much stricter in not accepting work from hallway conversations, and we often found ourselves in the role of caring for the work that others in the overall ecosystem should do. Funnily enough, the numbers showed that for every 2 tickets we had worked on ourselves, we had created 3 for other teams. The number showed our growing role in ensuring other teams understood what hopes were directed towards them. It was also fascinating to realize that 70% of the work we had identified for others was done within the same year, indicating that it wasn't just empty passing of ideas but a major driving force.
- Code changes. With the idea that for a DevOps team nothing changes unless the code (including configurations) changes, I looked around for numbers of code going into the product. I counted how many people contributed to the codebases and noted it was growing, and I counted how many separate codebases there were and noted that too was growing. I counted the number of changes to the product, and saw it double year over year. I noted that for every 4 changes to the product, we had 3 changes to system-level test automation. I noted code sharing had increased. The year-over-year numbers were a delight: from 16% to 41% (people committing to over N components) and from 22% to 43% (more than M people committing on them) on the two perspectives of sharing I sampled. I checked that my team was a quarter of the people working on the product line, and yet we had contributed 44% of the changes. I compared changes to Jira tickets to learn that for each Jira ticket, we had made 6 changes. Better to use the time on changing code than managing Jira, I would say.
- Releases. I counted releases, and combinations included in releases. If I wanted to show a smaller number, I just counted how many times we completed the process: 9 - a number published in the NEXTA article we wrote on our test automation experience.
- Features pending on the next team. I counted that while we had 16 of them a year before, we had none with the new process of taking full benefit of all code being changeable - including code owned by other teams. Writing code over writing tickets for anything of priority to our customer segment.
- Features delivered. I reverse-engineered the features from the ticket and change numbers, and got to yet another (smaller) number.
- Daily tests run. I counted how many tests we now had running on a daily basis. Again, information that is published: 200,000.
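The changes-to-tickets ratio above could be derived from version control history. A minimal sketch, assuming Jira-style ticket keys in commit messages — the commit messages, ticket IDs, and function name below are illustrative assumptions, not the team's actual data or tooling:

```python
import re

# Hypothetical sketch: these commit messages and ticket keys are invented
# to illustrate deriving a changes-per-ticket ratio from commit history.
TICKET_RE = re.compile(r"\b[A-Z]+-\d+\b")  # Jira-style keys like PROJ-123

def changes_per_ticket(commit_messages):
    """Count commits against the distinct Jira tickets they reference."""
    tickets = set()
    for message in commit_messages:
        tickets.update(TICKET_RE.findall(message))
    if not tickets:
        return 0.0
    return len(commit_messages) / len(tickets)

commits = [
    "PROJ-1 add config flag",
    "PROJ-1 fix flag default",
    "PROJ-2 extend system test",
    "refactor logging",          # no ticket: internally sourced work
    "PROJ-2 address review",
    "PROJ-1 docs",
]
print(changes_per_ticket(commits))  # 6 commits over 2 tickets -> 3.0
```

In practice the same counting works over `git log --oneline` output; the point is that the ratio falls out of data the team already has, with no extra Jira bookkeeping.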
This blog is about thinking of things past, present and future in testing. As much as I'd like to see clearly, my crystal ball is quite dim. Learning is essential and this is my tool for that. A sister blog in Finnish: http://testauskirja.blogspot.com
Tuesday, December 29, 2020
I like numbers
Thursday, December 17, 2020
The box with Christmas Ornaments
There is a fascinating way of coming to the idea that the problem is almost always testing. Here's a little story of something that has happened to me many times in many organizations, and that I was recently inspired to think about. Maybe it is because it is almost Christmas. :)
Speaking in metaphors: the box with Christmas Ornaments inside.
Once upon a time, there was a product owner who ordered a Box with Christmas Ornaments. As product owners go, they diligently logged their Epic into Jira, with acceptance criteria clearly outlining what the Box with Christmas Ornaments would look like when delivered.
The Developers and the Testers got busy with their respective work. Testers carefully reviewed the acceptance criteria that were co-created, and outlined the details of how testing would happen. Developers outlined the work they needed to do, split the work into pieces, and brilliantly communicated to testers which pieces were made available at each point. Testers cared and pinged on progress, but when things aren't complete, they are not complete.
The test environment for the delivery was a large table. As pieces were ready from the Developers, their CI system delivered an updated version into the middle of the table. The Box with Ornaments was first a pile of cardboard, and everyone could see it was not there yet. But as work progressed, the cardboard turned into a Box, without the Ornaments. As per status, pieces were delivered (and tested), but clear parts of the overall delivery were still undone.
Asked for status and wanting to be positive, Developers would report on each piece completed, and the Box on the table looked like it was there. It was there for quite some time. Asking testers for the status of testing, one would learn that testing was incomplete, and it was so easy to forget that there were scenarios that required both the Box and the Ornaments to make sense of the final item, even if we could test, and had tested, each individually.
The product owner, equipped with their Epic in Jira and looking towards the table, concluded:
Things get stuck in the process. They are long in an intermediate stage. It feels like they don't care about delivering me my package, they just leave it lying around for testing.
It's not like they ordered the Box without Ornaments. Yet it looks ready enough that putting the Ornaments in feels like extra wait time.
To achieve flow of ready work into the hands of whoever is expecting it, optimizing developer time across multiple deliveries really does the negative trick. Yet we still, in so many cases, consider this a problem with testing.
I know how to fix it. Let's deliver it as soon as the developer says so. No more Testers in the place you imagine them - between implementation and you having that feature in your hands.
A better fix is to deliver the empty box all the way to the customer as soon as it is ready, and to carefully think whether the thing they really wanted was the Ornaments, and whether another order of delivery would have made more sense.
Tuesday, December 8, 2020
RED green refactor and system test automation
In companies, I mostly see two patterns with regards to red in test automation radiators:
- Fear of Red. We do whatever we can to avoid red, including being afraid of change. Red does not belong here. Red means failure. Red means I did not test before my changes could be seen by others.
- Ignorance of Red. We analyze red, and let it hang around a little too long, without sufficient care for the idea that one known red hides an unknown red.
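The second pattern can be pushed back on mechanically: flag any "known" red that has stayed on the radiator past some threshold, since that is exactly where an unknown red can hide behind it. A minimal sketch — the field names and the 3-day threshold are my assumptions, not anything from the post:

```python
from datetime import date, timedelta

# Hypothetical sketch: a radiator entry maps a test name to the date it
# first went red and whether someone has "analyzed" the failure. A known
# red left standing too long starts hiding unknown reds, so flag it.
MAX_KNOWN_RED_AGE = timedelta(days=3)  # assumed threshold

def stale_known_reds(failures, today):
    """Return names of analyzed-but-still-red tests older than the threshold."""
    return [
        name
        for name, (first_red, analyzed) in failures.items()
        if analyzed and today - first_red > MAX_KNOWN_RED_AGE
    ]

radiator = {
    "test_login":  (date(2020, 12, 1), True),   # known red, a week old
    "test_search": (date(2020, 12, 7), True),   # known red, still fresh
    "test_export": (date(2020, 12, 2), False),  # unanalyzed red, handled elsewhere
}
print(stale_known_reds(radiator, today=date(2020, 12, 8)))  # ['test_login']
```

Whether the flagged test then blocks the pipeline or just escalates loudly is a team choice; the point is that "analyzed" must not mean "allowed to stay red indefinitely".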
Tuesday, November 24, 2020
Orchestrating System Test Automation for Visual Thinkers
Working on a sizable system, feature after feature I found us struggling with the timeliness of completing testing. At each feature kickoff, testerkind showed up to listen and start learning. While development was ongoing, more learning through exploring took place. And in the end, after the feature was completed, tests were documented in test automation, and, just in case, time was taken for some more final exploring.
It seemed like a solid recipe, but what I aspired to was finding a way where testing could walk more in sync with the development. The learning time was spent either at half attention or occupied elsewhere, and it resembled a mini-waterfall for each feature.
I formulated an experiment, with the intent of learning if being active with test automation implementation from the start would enable us to finish together. And so far, it is looking much better than before.
The way I approached test automation implementation was something I had not done before. It felt like a thing to try, to move the focus of testerkind from listening and learning to actively contributing and designing. In preparation for the feature kickoff, I drew an image of points of control and visibility, tailored to the specific feature:
- ssh to a command line in a remote computer is a touch point of both control and visibility
- reading a file in the filesystem is a touch point of visibility
- reading a system log is a touch point of visibility
- calling a REST API and verifying responses is a touch point of both visibility and control
- clicking a button on the UI and entering values is a touch point of control
Having agreed on the touch points, we could do test automation one touch point at a time. We could already work on the touch points that were not about to change, and see which touch points depended on the changes. We could talk about how the order of development enabled testing. And most importantly, we could talk about which of the touch points I had identified did not make sense at the system scale, because we could address those risks with unit and component tests.
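The touch-point map above can be sketched as data, so the team can query it while agreeing on order of work. A minimal sketch, assuming a simple control/visibility classification per touch point — the class and function names are mine, not the team's actual tooling:

```python
from dataclasses import dataclass

# Hypothetical model of the feature's touch points. Each point records
# whether tests can drive the system through it (control) and whether
# tests can observe the system through it (visibility).
@dataclass(frozen=True)
class TouchPoint:
    name: str
    control: bool
    visibility: bool

FEATURE_TOUCH_POINTS = [
    TouchPoint("ssh remote shell", control=True, visibility=True),
    TouchPoint("file in filesystem", control=False, visibility=True),
    TouchPoint("system log", control=False, visibility=True),
    TouchPoint("REST API", control=True, visibility=True),
    TouchPoint("UI button and inputs", control=True, visibility=False),
]

def observe_only(points):
    """Touch points that offer visibility but no control."""
    return [p.name for p in points if p.visibility and not p.control]

print(observe_only(FEATURE_TOUCH_POINTS))
# ['file in filesystem', 'system log']
```

Even this small an inventory makes the kickoff conversation concrete: observe-only points are where automation can start before the feature changes land, while control points tend to depend on the order of development.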
The current popular thinking in the testing community is to paraphrase testability into something wider than visibility and control. This little exercise reminded me that I can drive better collaboration test-automation-first, without losing any of the exploratory testing aspects - quite the contrary. This seemed to turn the "exploring to learn randomly while waiting" into a little more purposeful activity, where learning was still taking place.
In a few weeks, I will know if the original aspiration of starting together actively to finish together will see a positive indication all the way to the end. But it looks good enough to justify sharing what I tried.
Monday, November 23, 2020
Stop paying users, start paying testers
At work, when I find a bug, I'm lucky. I follow through during my work hours, help with fixing it, address side effects - the works. That's work.
When I'm off work, I use software. I guess it is hard not to these days. And a lot of the software recently makes my life miserable in the sense that I'm already busy doing interesting things in life, and yet it has the audacity to block me from my good intentions of minding my own business.
Last Friday, I was enjoying the afterglow of winning a major award, bringing people to my profile and my book, only to learn that my books had vanished from LeanPub. On the very day I was more likely to reach new audiences, LeanPub had taken them down!
After a full excruciating day of wondering what I had done wrong to have my author account suspended, LeanPub in Canada woke up to tell me that I had, unfortunately, run into a "rare bug". The next day I had my books back, and a more detailed explanation of the conditions of the bug.
If I felt like wasting more of my time, I guess I could go about trying to make a case for financial losses.
- It took time out of my busy day to figure out how to report the bug (not making it easy...)
- It caused significant emotional distress with the history of one book taken down in a dispute the claimant was not willing to take to court
- It most likely resulted in lost sales
- Someone claimed I was "testing in production" because through my profession, I couldn't be a user.
- Someone claimed I was "testing without consent" because I wasn't part of a bug bounty in finding this
- Someone claimed that I was breaking the law using the software with a vulnerability hitting the bug
- Someone claimed I was blackmailing Foodora on the bug I had already reported, for free, expressing to them I was not doing this for money in our communication
- Someone claimed I was criminally getting financial benefit of the bug
- Someone claimed the company could sue me, their user, for libel in telling they had a bug
- Someone claimed I was upset they did not pay me more, not about the fact that they didn't pay a competent tester in the first place (I know how I got to the bug; I would have found it working for them - exploratory testers couldn't avoid it, while an automation-only strategy or test cases could)
- Someone claimed I was eating at the company's expense, when I was reporting 200 euros of losses (for food they had thrown away)
Asking Your Users' Perception
A colleague was working on a new type of application, one that did not exist before. The team scratched together a pretty but quick prototype, a true MVP (minimum viable product) and started testing that on real users.
Every user they gave the app to gave the same feedback, after their first experience of use, on a detail they did not like.
Fixing the thing would take a while, but since the feedback was so unanimous, the team got to work. A week later, they delivered a new version.
Every user that had the app before hated that there was a change; they had grown to like the way it was.
Every user they added for a first-time experience hated how it was different from the usual way things are done. For the first day, until they learned.
I shared this story with my team at work this morning, as my team was wondering if we should immediately put back the "in review" state on our work board in Jira. The developers were so used to calling "in review" their done, while also refusing to call it done and move it to the done column. They used to leave tickets in this in-review column, and no one other than themselves found any value in it.
We agreed to give the change two weeks before going back to reflect on it with the idea that we might want to put it back.
Users of things will feel differently while they are still adjusting to change and once they have adjusted to it. Your design needs may be to ease getting started, or to ease continued use. Designing is complicated. Pay attention.
Sunday, November 22, 2020
The One Thing That Turns Your Testing To Exploratory
I worked with a product owner who liked seeing all of us in the team put down very clear tasks we were doing, with estimates of how long each would take to complete. Testing never fit that idea too well.
Some tasks we do, until we are done with them.
Some tasks we do, until we've used time we're willing to invest.
Some tasks, through learning, become very different from what they originally were, and writing them down may get you doing the wrong (planned) task instead of the right (emergent) one.
Test cases, and managers saying things like "I expect to see 20 test cases completed every work day" or "add one case to automation every day", give testing the feel that instead of learning and exploring, we're squeezed on time.
Thinking of the explorers of territories, finding something new and interesting was probably not likely if the orders were to take the main road, stick to it, and make it to the destination on an optimized schedule.
The one thing that turns your testing into exploratory testing is time. Take time, and think about using that time in a smart way. You won't know all things in advance. But without time, you wouldn't be able to be open to learning.
This insight comes from last Friday, when I was trying to get to a conference stage. I left on time, and the turns on the way led me to a completely different talk than I had intended to give - the one I intended to give would not have worked out for that stage.
Things that can only happen when isolating...
— Maaret Pyhäjärvi (@maaretp) November 20, 2020
I was about to leave for downtown to record a talk for #MimmitKoodaa. As I stepped out the door, I realized we have snow. I had no winter tires. Wait, it gets better (1/n).