Friday, December 30, 2022

Baselining 2022

I had a busy year at work. While working mostly with one product (two teams), I was also figuring out how to work with other teams without either taking on so much that I became a bottleneck or so little that I would be of no use. 

I got to experience my biggest test environment to date in one of the projects - a weather radar. 

I got to work with a team that was replacing and renewing, plus willing and able to move from test automation (where tests are still isolated and centered around a tester) to programmatic tests, where tests are a whole-team asset indistinguishable from other code. This year we finalised the first customer version and added 275 integrated tests and 991 isolated tests to best support our work - and got them to run green in blocking pipelines. The release process went from 7 hours between last commit and release to 19 minutes, and the fixing stage went from 27 days to 4 days. 
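The isolated/integrated split mentioned above could be expressed, for instance, with pytest markers so a pipeline can run each group as a separate blocking stage. This is purely a hypothetical sketch: the function, marker names, and threshold below are assumptions for illustration, not the team's actual code.

```python
# Hypothetical sketch of splitting isolated from integrated tests with
# pytest markers; all names here are invented for illustration.
import pytest

def reflectivity_label(dbz: float) -> str:
    """Toy stand-in for product code: classify radar reflectivity."""
    return "heavy" if dbz >= 40 else "light"

@pytest.mark.isolated
def test_label_high_reflectivity():
    # Isolated: exercises a single unit, no external dependencies.
    assert reflectivity_label(45.0) == "heavy"

@pytest.mark.integrated
def test_label_through_the_stack():
    # Integrated: in a real suite this would go through deployed services.
    assert reflectivity_label(10.0) == "light"
```

A pipeline could then run `pytest -m isolated` as a fast early gate and `pytest -m integrated` in a later blocking stage, with both markers registered in `pytest.ini` to keep the suite warning-free.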

I became the owner of the testing process for a lot of projects, and equally frustrated with the idea of so many owners of slices that the coordination work outweighs the value work. 

I volunteered with Tivia (ICT in Finland) as a board member for the whole year, and joined the Selenium Open Source Project Leadership Committee in early autumn. Formal community roles are definitely a stretch. 

I got my fair share of positive reinforcement for doing good things for individuals' salaries and career progression, for learning testing, and for organising software development in smart ways some might consider modern agile. 

I spoke at conferences and delivered trainings, a total of 39 sessions, of which 5 were keynotes. I added a new country to my list of appearances, bringing the total to 28 countries I have done talks in. I appeared as a guest on 7 podcasts. 

I thought I did not write much on my blog, and yet this post is the 42nd of the year here; I also have one #TalksTurnedArticles on dev.to and one article in IT Insider. My blog now has 840 222 hits, which is 56 655 more than a year ago. 

I celebrated my Silver Jubilee (25 years of ICT career) and started out with a group mentoring experiment of #TestingDozen. 

I spent monthly time reflecting and benchmarking with Joep Schuurkes, did regular on-demand reflections with Irja Strauss, ensemble programmed tests regularly with Alex Schladebeck and Elizabeth Zagroba, and met so many serendipitous acquaintances from social media that I can't even count them. 

I said goodbye to Twitter, started public note-taking on Mastodon, and settled into irregularly regular posting on LinkedIn. I am there (here) to learn together. 

Wednesday, December 28, 2022

A No Jira Experiment

If there is something I look back on from this year, it is making changes that seemed impossible. I've been going against the dominant structures, making small changes and enabling a team where continuous change, always for the better, is taking root. 

Saying it has taken root would be overpromising, because the journey is only beginning. But we have done something good. We have moved to more frequent releases. We have established programmatic tests (over test automation), and we have a nice feedback cycle that captures insights about the things we miss into those tests. The journey has not been easy.

Early this year, when I wanted to drop Scrum as the dominant agile framework shaping how management around us wanted things planned, the conversations took some weeks, up until the point of inviting trust. I made sure we were worthy of that trust, collecting metrics and helping the team hit targets. I spent time making sure the progress was visible, and that there was progress. 

Yet, I felt constrained. 

In early October this year, I scheduled a meeting to drive through yet another change I knew was right for the team. I made notes of what was said.

I was told this would result in:

  • uncontrolled, uncoordinated work
  • slower progress, confusion with priorities and requirements
  • business case and business value forgotten 

Again inviting that trust, I got to do something unusual for the context at hand: I got to stop using Jira for planning and tracking work. The first temporary yes was for four weeks, and those were four of our clearest, most value-delivering weeks. What started as temporary turned into *for now*. 

Calling it a No Jira Experiment is a bit of an overpromise. Our product owner uses Jira for epic-level tickets and creates a light illusion of visibility into our work there. Unlike before, now that he is moving just a few things around, the statuses are correctly represented. The epics have documented acceptance criteria by the time they are accepted. Documentation is the output, not the input. 

While there are no tickets for the details, the high level is better.

At the same time, we have just completed our most complicated everyone-involved feature. We've created light lists of unfinished work, and are just about to delete all the evidence of the 50+ things we needed to list, because our future is better when we see the documentation we wanted to leave behind, not the documentation of what emerged while the work was being discovered. The commit messages were more meaningful, representing the work now done, not the work planned some weeks before. 

It was not uncontrolled or uncoordinated. It was faster, with clarity of priorities and requirements. And we did not forget business case or business value. 

It is far from perfect, or even good enough for the longer term; we have a lot to work on. But it is so much better than following a ticket-centric process, faking that we know the shape of the work and forcing an ineffective shape because that seems to be expected and asked for. 

Tuesday, December 6, 2022

There is such a thing as testing that is not exploratory

The team had been practicing the core routine of clarifying with tests for a while, and they invited an outsider to join their usual meeting.

Looking around, there were 8 people on the call. The one who had called the meeting shared their screen for what was about to be their routine test design session. He copied the user story they had been assigned to work on into the Jira ticket he had open, and asked the group for ideas of tests. 

People started mentioning things that could be tried, and the one in charge wrote them down as the rest of the group watched. For each high-level idea, he called for the detailed steps. Every idea that came up was written down and detailed. 

After the meeting, the group would split the work to automate it all. 

And two weeks later, they had all these tests passing, and a feature that just did not work.


The magic glue they miss is what I call exploratory testing: a sense of caring for the results of testing by focusing on learning, and recognising that the time when most people create test cases like the above is the time when they know the least.

You can add exploratory testing on top of this.

You can lighten up the details you write down to start with, leaving more room for exploratory testing, where the output of your testing is the documentation. 

You can seek to optimize for learning, being aware of the time used.


The team following this planning pattern did not do exploratory testing. You could argue the next team, using their results, did exploratory testing by trying to use the feature and telling them it was failing. 

Working with teams like this is real. It is embarrassing, but it is real. And we don't change it by playing with words, but by making the results we expect clear and by giving permission for excellence. 


This story, unfortunately, was brought to you by the "tell no stories that did not happen" and "only stories from the last two years allowed" rules. This is real. This is what people still make of testing, and some folks meekly follow the agreed practice.  

Monday, December 5, 2022

Jokes and Poetry from a Stochastic Parrot

Tell a joke about a programmer and a tester attempting to enter Heaven and having to confess their sins.

Two programmers were on their way to Heaven and they came to a fork in the road. One path led to Heaven, the other to Hell. The programmer who had been a tester in life said, "I'm not taking any chances. I'm going to test both paths to see which one is safe." The programmer who had been a developer in life said, "You fool! We don't have time for that. We need to take one path and trust that it leads to Heaven." The tester replied, "Trust, but verify."

Another!

A programmer and a tester were standing at the Gates of Heaven, waiting to confess their sins. The programmer went first and said, "I wrote code that I knew was buggy and released it anyway." The tester said, "I found the bugs in that code and didn't report them." The Gates of Heaven opened, and the programmer was welcomed inside. The tester was shocked and asked, "Why was he allowed in and not me?" The gatekeeper replied, "We already have plenty of bugs in Heaven. What we need are more testers."

More.

A programmer and a tester were arguing about whether it was better to be a programmer or a tester. The programmer said, "Programmers are the ones who create the software that makes the world go round. Without us, there would be no progress or innovation." The tester replied, "That may be true, but without testers, all of your buggy code would never be caught and fixed. Testers are the ones who ensure that the software works properly and is of high quality." The programmer thought for a moment and then said, "I suppose you have a point. I guess it's a good thing that programmers and testers need each other to make the world a better place."

Make it more funny.

A programmer and a tester were on their way to Heaven, and they came across a sign that said "Heaven: 10,000 miles." The programmer looked at the sign and said, "I don't believe it. They've made a mistake. Heaven can't be that far away." The tester said, "Of course it is. They've measured the distance with a ruler, and it's accurate to within plus or minus an inch." The programmer rolled his eyes and said, "That's why I always use double-precision floating-point numbers. They're much more precise." The tester replied, "Yes, but they can't measure the distance to Heaven. Only faith can do that."

Ok, just one more.  

A programmer and a tester were standing in line at the Gates of Heaven, waiting to confess their sins. The programmer went first and said, "I wrote code that was full of bugs and never tested it." The tester said, "I tested the code and found the bugs, but I never reported them." The Gates of Heaven opened, and the programmer was allowed inside. The tester was shocked and asked, "Why was he allowed in and not me?" The gatekeeper replied, "We have plenty of bugs in Heaven. What we need are more testers to help us find and fix them."

You already told that, I want a new one.

I'm sorry, I don't have access to new information at this time. I am a large language model trained by OpenAI, and my knowledge is limited to what I was trained on. I don't have the ability to browse the internet or access new information, so I can only provide responses based on what I already know. Is there something else I can help you with?

One more joke about testers and programmers, please. 

A programmer and a tester were at a restaurant, and the waiter brought them a menu. The programmer looked at the menu and said, "I don't see any bugs on this menu. This must be a high-quality restaurant." The tester replied, "That's what you think. I've already found three bugs on the menu: the prices are incorrect, the descriptions are misleading, and the options are limited. I'm not sure I want to eat here." The programmer laughed and said, "I guess that's why we need both programmers and testers. Without us, the world would be full of bugs and errors."

 Give me a poem about exploratory testing.  


Thank you, OpenAI ChatGPT.  The tools we use are changing the world in ways we don't yet understand. Whose jokes and poems are these, or are these indeed "AI originals"? 

On a more serious note, you should see what it does with Advent of Code day 5 on day 5. A separate leaderboard for AI and humans does not quite seem like the solution.