Wednesday, December 26, 2018

Thank you 2018, what next?

The year is approaching its end, and I continue my tradition of cyclic reflection. The year cannot change unless I look back at what I did, what big lessons I'm taking out of the experience, and whether things are changing. I live my life letting myself do things I want to do, not things I plan to do. And enjoying what I do is my very public secret to getting a thing or two done.

For continuity in my reflection, I looked at what I wrote a year ago. Things look so very different with perspective and more understanding. I looked at 2017 as a difficult year, feeling alone at conferences. I should see 2018 as a difficult year, as it saw the end of a long-term relationship; instead I see it as liberating me from the pains I was going through in 2017. Looking forward to seeing how things look yet another year into the future.

Conferences and Travel

To end last year, I decided:
"So 2018 will see less of me abroad."
That did not work out too well. Or in a way, it was clearly divided. I did a lot of travel in the first half of the year, unable to cancel any of the existing commitments, until I got to a point where I needed to clear the second half of the year, leaving only one travel destination.


I ended up spending 120 hours on planes even with the 6 conferences I cancelled for the latter half of 2018. I was guilt-tripping over the "first yes, then no", but looking at the new speakers and keynotes my cancellations helped enable, I can only be delighted and hope these conferences want to hear from me when my family life is better sorted out.



Despite being off conferences, I seemed to compensate with local meetups, online webinars and paid trainings, ending up with 28 sessions delivered in 2018, bringing my public speaking total to 380 sessions since I started. I have always done these on the side of a full-time hands-on job, and find they bring great energy and balance.

There are three particular highlights in 2018 for me. The talk I did with our 16-year-old intern at Nordic Testing Days was an effort to deliver, but it showcased awesome growth as a tester, and turning a 16-year-old into a public speaker in a conversational two-person talk was lovely. Finding #MimmitKoodaa (enabling women programmers in Finland) and teaching Java with them was another highlight.
The third highlight was the invitation to teach again the Testing and Quality Assurance course at Aalto University.

Comparing Numbers
  • 10/28 sessions were abroad (2017: 21/30)
  • 110 blog posts (2017: 103)
  • 572,448 total hits on my blog (2017: 490,628)
  • 4905 followers on Twitter (2017: 3889)
  • 1090 readers with 300 paid for Mob Programming Guidebook (2017: 741 with 201 paid)
  • 336 readers with 73 paid for the Exploratory Testing book (2017: 145 with 17 paid)
  • 201 collaboration calls with potential speakers for European Testing Conference (2017: 120 calls)
Personal Branding

What I did in 2018 followed my energies, and it's been a little funny realizing how muddled my personal brand is. I have no control over what people know me for, and frankly I'm no longer sure I care.

I'm a development manager (not a test manager), a hands-on tester and a programmer.

I'm a speaker in topics of exploratory testing, pairing and mobbing, and agile.

I'm a mentor and a teacher. I teach exploratory testing hands-on, and I mentor people on their general career but in particular on becoming speakers.

I'm an author of this blog and now three books: Mob Programming Guidebook, Exploratory Testing and Strong-Style Pair Programming. All my books are works in progress and on LeanPub.

I'm a conference organizer and community facilitator, organizing European Testing Conference and facilitating Software Testing Finland and Tech Excellence Finland and helping run SpeakEasy as one of the four leadership team members. I show up for Women in Testing Slack group (that grew from 150 to 350 members) and take time to promote the awesomeness around me.

I'm a social justice warrior, a derogatory term I feel like owning since it was used on me by a family member. I try to change things I can change, and not give up just to be comfortable. My most common cause is #PayToSpeak - the unfairness of having to have money to pay for travel in order to work for conferences - and turning that dynamic around to enable diverse voices.

Looking Back

I did a lot and became more comfortable being me, doing my things. I learned to appreciate that people can seem good and still do a lot of damage, and I found some of the inner strength I had given away.

Work at F-Secure is still wonderful. I became a development manager, and have spent my time since job crafting a manager job to look almost exactly like a tester job I used to have. I have some of the best colleagues, and look at how much we got done in our No Product Owner mode with delight. We enabled faster releases by more than just two people in the team, got our user numbers up to levels that feel intimidating as we started using the word "million", and found new ways of using statistics on both successes and failures to transform the ways we serve our user base.

My kids do a lot better in school now that they have me more at home. And my home is full of love, with two 10+ year-olds being their own personalities.

I made an impact on some people's speaking careers, and one person in particular made an impact on me: Kristine Corbus showed me how anyone of us can choose to raise up the others, and how I'd rather model after someone like her.









Tuesday, December 25, 2018

Don't be a backseat driver

As I'm pairing with someone, I find it really difficult to negotiate the "contract" for that pairing session. Asking for strong-style pairing (I have an idea, you take the keyboard and I tell you what to do) or traditional-style pairing (I have an idea, give me the keyboard and watch and comment) can both be appropriate, depending on who the person I'm pairing with is, and how they interact with me. But the time when we set the rules on how we pair, unless they are given by a facilitator for the session, is the time when I know the least about my pair; it's the time when my inherent "making space for the other" is at its strongest, and I easily find myself in a place where I'm disengaged and uncomfortable.

At a workshop a few weeks back, a friend of mine ended up pairing with a stranger. They had only done pairing in workshops with me, where I introduce and enforce strong-style for the connection, but also make the rules and expectations clear. Now they were told to pair, with someone who does not pair, and the setting was far from optimal. There was a skills difference not in their favor, and as they ended up watching the more experienced one, they quickly fell out of the loop of what was even going on. The computer they paired on belonged to the other, who wouldn't share it because it was set up just right for work. And the only way to pair my friend had been taught was strong-style, which really increases newbie involvement in an uneven pair. It was clear they did not enjoy it. They left halfway through the three sessions.

Learning to talk about the two styles of pairing has helped me a lot in this regard. Now I have words to start the negotiation from. So I was delighted to find two more words for pairing patterns from videos of Alex Harms delivering a talk on pairing. The words were more anti-patterns than good styles: side-by-side pairing, where the more experienced one sets themselves above and outside the engaged pair, doing their own thing and being available for questions and mild hovering; and backseat driving, where the person not in a position to steer tries to do that anyway.

I could not help but wonder if Alex had run into a particularly inconsiderate experience with strong-style pairing, because without setting up the relationship with consent, strong-style pairing can easily be indistinguishable from backseat driving.

Let's stop to think about that for a moment. What does good pairing look like? It looks like doing work by two people, where both are engaged in the same work. To be engaged, you need to be there willingly. And opting in to pair isn't always willing, if you did not know what is coming up.

Thinking about the roles in a car is helpful in remembering what it could look like. 
  • Driver is always the person on the controls. No matter what anyone else says, driver has the ultimate power of taking things their way. 
  • Navigator is helping the driver. Navigators can be well versed in the big picture not paying attention to the road, or know the details of the road and help step through the route in an optimal way. In traditional pairing, navigator reviews. In strong-style pairing, navigator controls the high level choices with words. 
If you had a backseat driver in the car, that person would be like a navigator, but operating without consent. That person could be very engaged in the pairing, but their input wasn't welcomed or accepted by the driver. A backseat driver might be exactly like a strong-style navigator. The difference is in the contract, that is often implicit, and the assumed power difference.

In the workshop some weeks ago, I also ended up pairing with someone I had not paired with before. It was their computer, and they used Vim - effectively making me feel unwelcome on the keyboard. I did not leave halfway through and quite enjoyed the session. Looking back, we ended up with strong-style pairing where I would actively suggest ideas.

The more I pair, the less the difference of traditional / strong-style makes sense. But in starting, it meant the world to me. And in continuing long-term, I realize that strong-style also made me uncomfortable many times, pushing a power differential I did not consent for. 

Having both in the bag is good. The lesson here is that you should take a moment to negotiate the pairing contract. Especially for people who have a hard time connecting to the other on an emotional level and hearing what is not said in words, strong-style can become an act of forcing your opinions on the other, just as hogging the keyboard in traditional-style would.

The difference between a backseat driver and a strong-style navigator is consent and trust. The former delivers unwelcome guidance; the latter provides instructions that were asked for, at the level they are able to give and find necessary.

And since mob programming relies on strong-style pairing as its mechanism of connecting the group, imagine having a whole car full of backseat drivers... That could be very uncomfortable.


Thursday, December 13, 2018

A Pesky Bug that Exploring Would Help With

I work with a particularly great team, and even great teams make mistakes. Many other teams, great or less so, would choose to hide their mistakes. I find I wear our mistakes as a medal of honor, as in having looked at them, figured out what I could try doing differently, and going into the future again an experience richer. And looking forward to a different mistake.

In the last weeks, we've dealt with a mistake that is particularly pesky from a tester's point of view, because it is a failure in how we test.

As bugs go, different ones show themselves in different ways. This particular one has limited visibility to our customers, as they can only see second-order symptoms. But the cost of it has been high - blocking the work of multiple other teams, diverting them from their intended work of creating good, valuable items for our users, and instead making them create tooling to keep their system alive as we were oversharing data towards them.

So there was a bug. A bad bug. Not a cosmetic one. But also not one visible easily for an end user.

The bug was created by one of our most valued developers.

Since it was created by someone we've grown to rely on, other people in the team looked at the pull request feeling confident in acceptance. After all, the developer is valued for a reason: consistently great work. No one saw the bug.

As we were testing the system, we made a few wrong judgements:

  1. We relied on the unit and system level test automation, that tests the functionality from a limited perspective. 
  2. We didn't explore around the changes, because exploring from the perspective of another system as a user requires special attention and we did not call for it. 
  3. We relied on repeating tests as we had before, and none of the tests we did before would have paid attention to the volume of information we were sending. 
  4. We had limited availability of team members, and we only see in hindsight that the changes were in a critical component. 
So we'll be looking at changes:
  • Figuring out how the pull requests could work better to identify problems or if they are more about consistency of style and structure as they've grown to be
  • Figuring out how to better integrate deep exploratory testing activities towards system functionalities (over user functionalities)
I have a few (ehh, 50) colleagues who wasted a significant amount of time keeping the mistake from surfacing wider while we did our remedies. 

These kinds of bugs would be the ones I'd want to find through exploring. And it would be a reasonable expectation. 

Less managing, more testing. My kind is more valuable as not a manager. The work happens hands-on. 

Wednesday, December 12, 2018

An Evening Detour to TCR

At an unusually early time for me, I waved goodbyes at the office, announcing I was heading to a workshop on TCR, and was greeted with a bit of eye-rolling and quick googling fingers. It was already established that I was more into volunteering for all sorts of learning activities. Aki Salmi hosted a workshop session on TCR with the Tech Excellence meetup, which I just so happen to facilitate, so I had all the excuses I could need to get over myself and over there.

TCR - Test && Commit || Revert - is a programming workflow, or a test-driven development flavor. Reactions have ranged from "well, got to try it" to "why are we confusing TDD more" to "will TCR replace TDD", and it felt like a worthwhile thing to try in great company.

Aki introduced the thing we were about to be experimenting with. Test-Driven Development (TDD) as we've come to know it has three steps, RED - GREEN - REFACTOR, and Test && Commit || Revert (TCR) removes the RED. If your tests aren't green when run, you lose what you worked on. If they are green, they get committed as the next baseline to work from.
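To make the loop concrete, here is a minimal sketch of what it could look like wired up as a script; it assumes a git repository and pytest on the path, and the exact commands are illustrative rather than the setup we used in the workshop.

```python
import subprocess

def passes(cmd):
    # Run a shell command; True means it exited with code 0.
    return subprocess.call(cmd, shell=True) == 0

def tcr():
    # Test && Commit || Revert: green work becomes the new baseline,
    # red work is thrown away along with whatever you just typed.
    if passes("pytest -q"):
        passes('git commit -am "tcr: green"')
    else:
        passes("git reset --hard")

if __name__ == "__main__":
    tcr()
```

Run something like this after every tiny change, and losing work on red is exactly the bet that pushes you towards small steps.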

Other than the focus on experiencing TCR, the session was framed by 3x25 minutes of paired work with impressions shared in between, the Lift (Elevator) kata, and whatever each pair ended up choosing as their language.

I paired with one of my favorite people, whom I had never paired with before. They came in with things set up on their computer, so the choice of language and IDE was settled: Python on Vim.

Over the three 25-minute sessions, the promise of having fun was well delivered.
  • The most common reason for reverting for us was syntax - we missed a part of the formatting. This made me aware of how much I prefer relying on PyCharm as my IDE, with my focus free from the details of the syntax. We also had a great little discussion on the feeling of control that comes from having to know and do every bit in Vim, which I wasn't appreciating. 
  • With another IDE, I find it a relief to work with intent, generating frames for the code; Vim as the editor made me aware of how much I appreciate other tooling. 
  • Differences between Python and Java felt evident in running tests with the same name, which Python just dealt with for us, while we would have had a couple more reverts if we had worked in Java. 
  • One pair "cheated" by commenting out the failing tests, and I'm still confused by it. Staying always green, if cheating is encouraged, is easiest by never having much in the way of tests, and that cannot be what Kent Beck means with "Cheating is encouraged, as long as you don't stop there."
  • With the workflow, I was missing seeing the test fail to test the test before implementing. I disliked having to think as the computer when I would much rather see the test fail first, but I wasn't willing to let the test be reverted just for that. 
  • Putting the commands together so that your changes are gone on red increased the sense of risk of losing your changes and introduced a language of betting a small amount of work. 
  • Being painfully aware of the risk of losing changes keeps changes small. It would require next-level abilities compared to what we were working with, though, to identify designs that drive us to smaller steps. 
Overall, I think I was just as bothered with "losing my IDE" as "losing the RED".

In the discussions afterwards, there was speculation about whether something of this sort would be more necessary when working with trunk-based development, where folks might commit tests while they are red, but that sounds to me like a different problem than just the programming workflow.

I find all these things useful as ways of learning about what you're comfortable with and how constraints of all sorts impact your way of working.

All in all, this just felt like a relaxed version of Adi Bolboaca's baby steps constraint, where you revert if you're not green in 3 minutes. With this style, you can see red, but get to a similar practice - making changes intentionally small without losing the feedback of a test first failing to know you're actually testing what you intended.  

Tuesday, December 4, 2018

Testing a Modify Sprite Toolbar

I've been teaching hands-on exploratory testing on a course I called "Exploratory Testing Work Course" for quite many years. At first, I taught my courses based on slides. I would tell stories, stuff I've experienced in projects, things I consider testing folklore. A lot of how we learn testing is folklore.

The folklore we tell can be split into the core of testing - how we really approach a particular testing problem - and the things around testing - the conditions making testing possible, easy or difficult, as none of it exists in a vacuum. I find agile testing still talks mostly about the things around testing, and the things around testing, like the fact that testing is too important to be left only to testers and that testing is a whole-team responsibility, are some great things to share and learn on. 

All too often we diminish the core of testing into test automation. Today, I want to try out describing one small piece in the core of testing from my current favorite application under test while teaching, Dark Function Editor.

Dark Function Editor is an open source tool for editing spritesheets (collections of images) and creating animations out of those spritesheets. Over time of using it as my test target, I've come to think of it as serving two main purposes:
  • Create animated gifs
  • Create spritesheets with computer readable data defining how images are shown in a game
To test the whole application,  you can easily spend a work week or few. The courses I run are 1-2 days, and we make choices of what and how we test to illustrate lessons I have in mind. 
  • Testing sympathetically to understand the main use cases
  • Intentional testing
  • Tools for documenting & test data generation
  • Labeling and naming
  • Isolating bugs and testing to understand issues deeper
  • Making notes vs. reporting bugs

Today, I had 1.5 hours at Aalto University course to do some testing with students. We tested sympathetically to understand the main use cases, and then went into an exercise of labeling and naming for better discussion of coverage. Let's look at what we tested. 

Within Dark Function Editor, there is a big (pink) canvas that can hold one or more sprites (images) for each individual frame in an animation. To edit image on that canvas, the program offers a Modify Sprite Toolbar. 


How would you test this? 

We approached the testing with Labeling and naming. I guided the students into creating a mindmap that would describe what they see and test. 

They named each functionality that can be seen on the toolbar: Delete, Rotate x2, Flip x2, Angle and Z-Order. To name the functionalities, they looked at the tooltips of some of these, in particular the green arrows. And they made notes of the first bug. 
  • The green arrows look like undo / redo, knowing how other applications use similar imagery. 
They did not label and name the tooltips, nor the actual undo/redo that they found in a separate menu, vaguely realizing it was a functionality that belonged in this group yet was elsewhere in the application. Missing a label and name, it became a thing they would have needed to intentionally rediscover later. They also missed labeling and naming the little x-mark in the corner that would close the toolbar, and thus would need to discover the toggle for the Modify Sprite toolbar later, given they had the discipline. 

The fields where you can write drew their attention the most. They started playing with the Z-order, giving it different values for two images - someone in the group knew without googling that this would have an impact on which of the images was on top. They quickly ran into the usual confusion. The bigger number would mean that the image is in the background, and they noted their second bug:
  • The chosen convention of Z-order is opposite to what we're used to seeing in other applications
I guided the group to label and name every idea they tried on the field. They labeled numbers, positive and negative. As they typed in a number, they pressed enter. They missed labeling and naming the enter, and if they had, they would have realized that in addition to enter, they had the arrow keys and moving the cursor out of focus to test. They added decimals under positive numbers, and a third category of input values: text. 
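To give an idea of what the labeling looked like at this point, here is my own rough sketch of the labels collected so far, written out as a simple outline; the grouping is illustrative, the real artifact was the mindmap the students drew.

```python
# A rough sketch of the labels collected so far, as a nested structure.
# The grouping is illustrative; the real artifact was a whiteboard mindmap.
modify_sprite_toolbar = {
    "functionalities": ["Delete", "Rotate x2", "Flip x2", "Angle", "Z-Order"],
    "missed so far": ["tooltips", "undo/redo (in a separate menu)",
                      "x-mark that closes the toolbar"],
    "Z-Order input values": {
        "numbers": ["positive", "negative", "decimals (under positive)"],
        "text": [],
        "missed so far": ["enter", "arrow keys", "cursor moved out of focus"],
    },
}
```

Every branch that stays empty or unlabeled is a reminder of testing not yet done.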

They repeated the same exercise on Angle. They quickly went for symmetry with Z-order, and remembered from earlier sympathetic testing they had seen positive value 9 in the angle work already. They were quick to call the category of positive covered, so we talked about what we had actually tested on it.

We had changed two images at once to 9 degree angle. 
We had not looked at 9 degrees in relation to any other angle, if it would appear to match our expectations. 
We had not looked at numbers of positive angles where it would be easy to see correctness. 
We had not looked at positive angles with images that would make it easy to see correctness. 
We had jumped to assuming that one positive number would represent all positive numbers, and yet we had not looked at the end result with a critical eye. 

We talked about how the label and name could help us think critically around what we wanted to call tested, and how specific we want to be on what ideas we've covered. 

As we worked through the symmetry, the group tried a decimal number. Decimal numbers were flat out rejected for the Z-order, which is what we expected here too. Instead, we found that when changing the angle from value 1 to value 5.6, the angle ended up as 5 as we pressed enter. Changing value 4 to 4.3 still showed 4.3 after pressing enter, and would go to 4 only when moving focus away from the toolbar. We noted another bug:
  • Input validation for decimal numbers worked differently depending on whether the new value started with the same digit as the old one or a different one.
As we were isolating this bug, part of the reason why it was so evident was that the computer we were testing with was connected to a projector that would amplify sounds. The error buzz sound was very easy to spot, and someone in the group realized there was asymmetry of those sounds on the angle field and the Z-order field. We investigated further and realized that the two fields, appearing very similar and side by side would deal with wrong inputs in an inconsistent manner. This bug we did not only note, but spent a significant time writing a proper report on, only to realize how hard it was. 
  • Input validation was inconsistent between two similar looking fields.
I guided the group to review the tooltips they had not labeled and named, and as they noticed one of the tooltips was incorrect, they added the label to the model and noted a bug. 
  • Tooltip for Angle was same as for Z-order description. 
In an hour, we barely scratched the surface of this area of functionality. We concluded with a discussion of what matters and who decides. If no one mentions any of the problems, most likely people will imagine there are none. Thinking back to a developer giving a statement about me exploring their application on the Cucumber podcast:
She's like "I want to exploratory test your ApprovalTests" and I'm like "Yeah, go for it", cause it's all written test first and its code I'm very proud of. And she destroyed it in like an hour and a half.
You can think your code is great and your application works perfectly, until someone teaches you otherwise.

I should know, I do this for a living. And I just learned the things I tested work 50% in production. But that, my friends, is a story for another time. 






It's not What Happens at the Keyboard

"What if we built a tool that records what you do when you test?", they asked. "We want to create tooling to help exploratory testing.", they continued. "There's already some tools that record what you do, like as an action tree, and allow you to repeat those things."

I wasn't particularly excited about the idea of recording my actions on the keyboard. I fairly regularly record my actions on the keyboard, in the form of video, and some of those videos are the most useless pieces of documentation I can create. They help me backtrack what I was doing, especially when there are many things that are hard to observe at once and watching a video is a better use of my time than trying the same things again at the keyboard - which is not very often. Or when I'm trying to figure out a pesky condition I created and did not even realize was connected. But even on that, 25 years of testing has kind of brought me better mechanisms for reconnecting with what just happened, and I've learned to ask (even demand!) for logs that help us all when my memory fails, as the users are worse at remembering than I will be.

So, what if I had that in writing. Or in executable format. It's not like I am looking for record-and-playback automation, so the value those would provide must be elsewhere. Perhaps it could save me from typing details down? But since I type just the right thing - after all, I'm writing for an audience - I would need to clean the recording up to the right thing, or learn not to mind the extra fluff there might be.

I already know from recording videos and blogging while testing, that the tool changes how I test. I become more structured, more careful, more deliberate in my actions. I'm more on a script just so that I - or anyone else - could have a chance of following later. I unfold layers I'm usually comfortable with, to make future me and my audience comfortable. And I prefer to do this after rehearsal, as I know more than I usually do when I first start learning and exploring.

A model of exploratory testing starts to form in my head, as I'm processing the idea of tooling from the collection of data of the activity. I soon realize that the stuff the computer could collect data on is my actions on the computer. But most of exploratory testing happens in my head.


The action on the computer is what my hands end up doing, and what ends up happening with the software - the things we could see and model there. It could be how a page renders, captured precisely as it is, so that for the future I can have an approved golden master to compare against. It could be recognizing elements, what is active. It could be the paths I take. 

It would not know my intent. It would not know the reasons of why I do what I do. And you know, sometimes I don't know that either. If you ask me why I do something, you're asking me to invent a narrative that makes sense to me but may be a result of the human need of rationalizing. But the longer I've been testing, the more I work with intentional testing (and programming), saying what I want so that I would know when I'm not doing what I wanted. With testing, I track intent because it changes uncontrollably unless I choose to control it. With programming, I track intent because if I'm not clear on what I'm implementing, chances are the computer won't be doing it either. 

As I explore with the software as my external imagination, there are many ways I can get it to talk to me. What looks like repetitive steps, could be observing different factors, in isolation and chosen combinations. What looks like repetitive steps, could be me making space in my mind to think outside the box I've placed myself in, inviting my external imagination to give me ideas. Or, what looks like repetitive steps, could be me being frustrated with the application not responding, and me just trying again. 

Observation is another thing the human side of exploratory testing brings. We can have tools, like a magnifying glass, to enhance our abilities to observe. But ideas of what we want to observe, and their multidimensional nature, are hard to capture as data points, and even harder to capture as rules. 

Many times the way we feel, our emotion is what gives another dimension to our observations. We don't see things just with our eyes, but also with how we experience things. Feeling annoyed or frustrated is an important data point in exploratory testing. I find myself often thinking that the main tool I've developed over years comes from psychology books, helping me name emotions, pick up when they come to play and notice reasons for them. My emotions make me brave to speak about problems others dismiss. 

Finally, this is all founded on who I am today. What are my skills, habits and knowledge I build upon. We improve every day, as we learn. We know a little more (knowledge), we can do a little more (skills) and can routinely do things a little more (habits). In all of these we both learn, and unlearn. 

I don't think any of the four human side parts of exploratory testing can be seen from looking at the action data alone. There's a lot of meaning to codify before tooling in this area is helpful. 

Then again, we start somewhere. I look forward to seeing how things unfold. 



Friday, November 30, 2018

Open Source and the New World of Creator Responsibility

As I'm minding my own business, doing some testing and enjoying myself, I get brutally interrupted by an incoming message. Someone somewhere has a problem. With the product I work with. I know that I care a lot, deep down, but the timing of the message is just so inconvenient. Reluctantly I offload from the work I was doing, preparing myself mentally to dig into the intrusion.

As I read the message and the related bug report, I feel the frustration in me growing. The steps to reproduce the problem are missing. The description of the problem is vague. It feels like the reporter did not even understand what they were doing, let alone know what the product was doing. Reading the text again and again, I come to the conclusion that they have an actual problem, but their report, without the right details and logs, does not help me do anything with it.

The tester that I am, I start testing the situation myself. Sure enough, it is not like I have tested our product together with Google Ads management interface. As I set up the environment, I realize I have to have an actual campaign set up so that I could even reproduce the problem. I toy between the "going to ask for company credit card" and "use my own credit card, here's the opportunity I always wanted to try ads for my own stuff" and go for the easy route and build a little campaign to advertise my conference. I confirm the bug, test more to figure out exact steps to isolate it to know what is the minimal impact workaround I can suggest, collect the logs and attach all of my investigation into the records I started with.

The fix is ready in 10 minutes. The release of the fix takes a little longer.

All of this happened in a project where I get paid for my time. Yet I am frustrated. But what about when we find ourselves working on our pet projects, sharing them as open source?

First of all, the code, whether closed or open source does what its creators make it do.

This means there is always creator responsibility of what can be done with the software you created.

With software, it's not just about being a creator as in writing some lines of code, but it is also being an owner of what you started creating. You can be responsible for the code you wrote to an extent, but when you move in open source projects to the idea of being responsible for code others created on a codebase you started, we're piling a lot of work and responsibilities on someone who did not really opt into it.

There was a great example of that this week.
It wasn't *mining*, it was *stealing*, as many pointed out. But the really peculiar thing to look at was the formation of camps assuming different responsibility for the owner as the original contributor.

Finally, posting this was motivated by a good friend of mine, who runs a relevant open source test automation tool project, tweeting this:
Going back to my story at the start of this blog post, you can't really always expect that your users - and that is what they are, even if your project is free and open source - will take the time to isolate and report problems. And when you make it so, at worst you are like another open source maintainer who goes around bragging about the quality of their code when the reality is that their users just don't bother reporting and instead choose to just walk away.
  
I know how to write good bug reports yet I rarely do. I only do it when I get paid for it, or as a result of it I get something I need. I want to optimize the time I spend and a quick report on problems is the smallest possible contribution I can choose to take.

Information is valuable for a project that wishes for wider-scale adoption. While there may not be direct money coming in from a free open source project, I find that many of the relevant creators have found a way of turning their thing into some income flow.

Say thank you to people for making you *aware* of work you can think about doing, and stop blaming anyone who works for free. I know it is hard to do, even when you have a paying customer.





Thursday, November 29, 2018

Forced Consistency Across Teams

The first thing I taught to our latest 15-year-old trainee was that we believe rules and processes need to be built with a core principle in mind: trust.

If someone might commit a crime, we don't put everyone in jail.

Many of the corporate processes and principles are an illustration of people really not getting this idea.

We make people clock in and out of work so that we know they worked. Except in this industry, time at the place of work is a ridiculous metric. I should know, I've just had two weeks of motivational issues where I was at work and performed really badly (by my standards).

We make people write test cases and tick the box as they complete them because, quoting an old manager of mine, "no one in their right mind would test if they were not monitored in this detail". Well, I did. And still do. And I love it. The devs loved it as soon as it wasn't tick-the-box.

We introduce common practices and processes for teams with very different skillsets and background, even if there was no common problem those solve.

When I look at processes and practices, I note that I have personal preferences. I prefer no estimates, no Jira, end-to-end visibility and a sense of ownership, understanding and solving problems. I recognize that everyone in my team has their own personal preferences, and I respect their preferences as I expect them to respect mine. I make compromises, like spending my time suffering with Jira just because they still haven't figured out that it isn't making them better. And they experiment with whatever ideas we all interject into the efforts of trying to make things better.

What inspired me to write this is a discussion about my personal dislike for definition of done.

I believe definition of done is a great tool for building a common understanding for a team on what they try to mean when they say done. I've used it, multiple times.

I've come to think of it as "definition of done for now".

I've learned that a deeper version of it is risk-based definition of done for now, even within one team. Cookie cutter templates rarely work for other than getting started.

I've experienced over and over again how forcing definition of done over many teams for reasons of consistency is short-sighted. First you have to understand if the teams are consistent, or if some are steps ahead of others, and approaching the same problem with a different solution could actually improve things more.

As with any practices and processes, I don't accept that it would be our only option for improvement. Using time on one thing is time away from something else - the idea of opportunity cost. If Definition of Done would help us make sense of a messy multi-team setting, would any other approaches work? Could you redesign the team compositions to force the architecture you aspire to, driving down the dependencies by leveraging Conway's law? Could you, instead of a Definition of Done (there are plenty of examples of what this contains), describe your team's responsibilities in some other format that would let you see a dimension DoD misses?

Using Jira states in a different way is hardly the reason why developers find it hard to start working on a new component. Looking at the code and its structures is a much more likely reason. Lack of documentation and training is a much more likely reason.

Go for consistency when it solves a problem without introducing bigger ones. Putting everyone in jail because one might rob the bank tends to be a bigger problem.



The Broken Phone Effect

I was totally upset with a colleague of mine, and to ease my heart, I ranted about them to my manager. Like that colleague just reminded me today, it is so hard to understand people like me who would just talk about problems without expecting a solution. And this rant was like that. I wasn't expecting an action. I just needed to talk.

Like so many times, I found that the metadata about what type of request was coming in from me did not go through. While my communication headers included the metadata asking for a sympathetic ear and a mirror to bounce things off, they only received the default: when presented with a problem, a solution is in order.

The solution however was particularly good this time. They suggested that I'd go and talk to my colleague. I did. It felt overwhelming and difficult. But it was a start of many great conversations where we built trust on one another, now knowing that we could constructively talk about anything and everything.

So I distilled my lesson. When I had something I felt strong enough to rant to another person, perhaps taking the extra step and talking to them directly, not about what *they do* but about what *I feel*, was a path worth taking.

With this lesson in mind, I've asked many people to talk to me directly when I'm involved in how *they feel*. I believe they cannot tell me what to do, but they can help me understand how what I do makes them feel, and I may choose to work on my behaviors, or at least help them understand how I make sense of the world. Remembering how hard those steps are, I appreciate that many people choose avoidance. I still choose avoidance in righteous anger when I feel neither my status nor past agreements justify me taking the first step. The term "emotional labor" comes to mind. Bridging disagreements requires it, and I'm tired of the expectation that it is my duty to perform it for the others.

Over the years as I have been blogging, people have reached out to tell they've been through what I write about. While I write for myself, I also write for people like myself. People who aspire to change themselves, change the results they contribute, change the world in some small way. My stories are not factual representations of events, but they are personal intertwining of many experiences that allow me to shine light to a relevant experience.

When my manager calls me to tell me I should not say "Google me", I wish the person I offended had had the guts to talk about this without the broken phone effect. I could have explained that I mean you will find articles I've written and research I've done, and that I did google your background enough to see that you are talking to someone who knows something about this stuff. Assume good intent. I rarely say things to insult.

If you have something to say, talk to me about it. If you don't but changed your ways for the better anyway, I'm fine with you being annoyed with me and avoiding me. Mutual loss, but we both have options.

A great option is to break the broken phone effect and just deal with your own stuff instead of sending a messenger. It might have a second order positive impact.

I tried, I failed, I succeeded. I learned. Can you say the same? FAIL is a first attempt in learning and takes a significant amount of courage.

Tuesday, November 27, 2018

Pay to Speak and Why Non-Profit Does Not Mean What You Think

Four years ago, I started a conference as an experiment in figuring out how a tech conference could be more fair to speakers. I had experienced many sleepless nights trying to figure out what my values were as a mother of two in allocating the family money to pay to speak at conferences, just because it was something I personally aspired to.

Looking at the way I felt about the immediate choices I had to make within my family, and the greater scheme of things in the community, I started actively looking at a theme I dubbed #PayToSpeak. It was an observation that while conferences sell the possibility to come hear speakers teach what they had learned, I found myself short of money, as showing up at conferences was something where I was expected to pay my own travel and accommodation. I paid, just like everyone else, building barely enough name to get to a point where I could have a choice.

In the greater scheme of things, people like myself, or with less privilege than what I had as a single mother of two, suffer more from #PayToSpeak. Also, where you work matters - some companies seek visibility at your conferences and are willing to pick up the travel bill, while I personally have chosen to work at product companies that would choose to invest their visibility euros in other conferences than the ones I want to speak at.

So I created a sheet, and very recently upped it to a web site with a link to the sheet. It will move forward when I feel I have time and energy. But it serves a purpose already as it is. You can check out http://paytospeak.org to learn about the theme.

Fairly regularly, I get conference representatives asking me to present them in a more positive light. CAST chairperson Maria Kedemo asking for improvements in the documents is not unusual.
However, I have not yet found a way of presenting information like this. As you may guess from the title of my post, being a non-profit is not as obvious a tick box as you might think.

The difference between a non-profit and a company as the organizer is hard to describe. Both can organize the same conference. Both can organize it for the same price for participants. Both can choose to use most of the money to pay salaries for the organizers, and all expenses for the speakers. Both can choose to be #PayToSpeak. The only real difference is in what they can choose to do with the profits.

Remember, profit is what is left after all the costs. Salaries are costs. So even as a company, you don't have to end up making a profit.

What non-profits do with the profit they raise is run their cause. The cause for AST is admirable. They use the money they make in conferences to financially support small testing communities that need that money to bring in speakers, pay event organizing costs and start new events. AST played a core role in financially supporting my conference in year one, saving me from some of the financial stress as I took the risk of going into organizing with them, rather than all by myself.

My conference uses the profits on supporting new speakers traveling to other #PayToSpeak conferences, and enabling people who aspire to speak to experience a conference they can't pay for. With the cost structure of always paying the speakers expenses and being uncertain about number of paying participants, the profits from the conference to use on the cause have not been very large.

Instead, my conference has served as an experimentation platform. I can now say that while speakers are important, the sales and marketing effort is more important to a conference's success. I have found new respect for people who manage to run series of conferences with volunteers only, and for conferences that pay their organizers. The choices of what work and costs are worth paying from the conference budget are not easy, and they vary and are hard to describe.

So I choose to only describe in my sheet the immediate impact for the speaker - what money out of pocket are they expected to find or what financial support they can expect to see, should they volunteer as speakers.

I dream of a world where we'd also have the money to compensate speakers for the time they use. That means that the audience - all of us - needs to have money to pay for those services. The world is more than half full of people whose companies never paid for a single tech conference. They might never get to go, or if they go, it is personal time off.

Tuesday, November 20, 2018

Stop Analyzing, Start Automating

I see systems. I guess we see things we like seeing, and I like seeing how the bits and pieces connect, what is clear and what is wrapped in mystery of promises of more learning in the future. I like seeing value, and users and flows. And pieces alone are part of that flow, but the promise comes together with the system.

For years, I've tested systems. I've figured out ingenious ways of seeing what changes, learning heuristics for which changes matter, all grounded in knowing why anyone would want to use this. Every moment testing an individual piece, as an exploratory tester, connects somehow to a greater purpose in the context of the system.

When I worked with a team of 10 developers as their only tester, we were doing daily releases without test automation, and it worked great. It worked great into slowly but steadily introducing test automation. But even without test automation, it contained the size of change. Each change would flow isolated through the pipeline with the manual steps. Just like coding was manual, testing was that too. Think, test, implement, test, think, release - a steady flow of features of value.

But now, the scale is different. Where I had 10 people before, I now have 100. And 100 developers, making non-isolated changes and merging to trunk as soon as they think they're ready, is change at a pace where one tester, even with ingenious ways of seeing and knowing things, is just not enough. This is where test automation as documentation comes in. With executable documentation, test automation frees my energy to analyze on top of it, not all of it. I no longer need to analyze details, but trends. Clusters of changes. Driving forces for those changes. Risks in the system, and risks in the people creating those systems. Automation catches some of it - quite a lot of it. And what it does not catch is a chance to identify what the automation is missing. To document with test automation.

I find myself in places where automation at first is more wishful thinking than an actual net of coverage. But learning, every day, and documenting with automation, it grows every day.

My analysis happens on a backlog visualization of changes. If I can fix and forget, I go there. But sometimes things need bigger focus. And as an exploratory tester and a system tester, I see what we miss. I label it, and ask for it.

I wouldn't know how to connect this stuff with reality if I did not spend time, hands on, with the systems we're building. The product works as external imagination, making my requests of what should be tested more practical. And while I prepare for the automation work, I just so happen to have already tested without the automation, found some problems and gotten them fixed.

We emphasize automation, for a reason. But in addition to folks who automate, we need folks who care for identifying things that take us further, make our automation do real testing. Not end to end, but covering a web of granular feedback mechanisms, so that we know when things are not right.

Saturday, November 10, 2018

New Speakers, New Stories - Agile Testing Days USA SpeakEasy Track

Yesterday at TestBash USA, one of people I've mentored behind the scenes delivered a talk. I woke up today to a delightful message: "...had people saying I was their favorite talk. I wouldn't have reached this point without your help, I can't thank you enough."

Today, at Belgrade Testing Days, lovely people on Twitter delivered me the news that another person I have mentored had a full house and got 4.93/5 in immediate app feedback for her first-ever talk.

There is something in common. These people did awesome with their talks. They invited help. But what that really shows is that they have always been awesome, and inviting help was just small part of them putting effort into making their messages accessible for others. It has been my pleasure to be a small part on their journey, and get privately insights into what they are teaching - me and others.

I believe we all have worthwhile lessons to share. And we are ourselves our worst enemies talking down to ourselves. There is something you've done. There is something you care for. Your approach, when shared, could help someone else figure out their approach. It could be the same as yours. It could be completely opposite, yet inspired by you. The conference stages are for us learning together, and we need different perspectives and stories on those stages.

You - yes, YOU - have this in you. And you don't have to take that step of becoming a speaker alone. That is why there is SpeakEasy, a community initiative of building productive relationships between speakers, mentors and conferences. I believe in this so much that I've formed a leadership team with 3 lovely colleagues to take the initiative forward from 2018 on. I believe in this so much that I have mentored dozens of people, and keep my calendar open for giving time to support people on their speaking journeys.

Right now, I am volunteering with SpeakEasy in collaboration with one of those lovely conferences: Agile Testing Days USA. We have a full SpeakEasy track we are building to get stories to learn from that wouldn't be available otherwise. We seek 6 talks, 2 workshops and 1 keynote. The talks are from new speakers. The workshops pair new speakers with more seasoned speakers. And the keynote is for a seasoned speaker who has not yet had their chance to break into the regular keynoting circles.

You have one more week to join this. To join as a new speaker, you need to schedule yourself into my calendar for a 15-minute discussion. We'll figure out what your early idea could look like, and consider it as something you'll build with support from a mentor if it is the right match for a balanced program. Schedule your session now. If nothing else, you'll get a chance to talk your experiences through with me, and hear my ideas on how you could frame them for other stages.

If you get selected, Agile Testing Days USA pays the travel (with specified limits) and accommodation, and you get to enjoy the other sessions in the conference too. It's a lot of work to prep a talk, but it is also rewarding to structure your thoughts so that others are able to follow. It is a skill, I find, that makes a difference in your career.

You are awesome. And I want to talk with you. I need you to take the first step. I can't find you when you've not taken that stage yet.









Changing the Discussion around Scope

People have an amazing talent for seeking blame. Blame in themselves, for what they did wrong, but also blame in others, for what they did wrong. Having a truly blameless retrospective, where we'd honestly believe that F.A.I.L. means First Attempt In Learning and embrace more attempts in the future, hopefully different ones, is a culture that takes a lot of effort.

I've personally chosen a strategy for working around scope that relies heavily on incremental delivery. Instead of asking how long it takes to deliver something, I guide people into asking if we could do something smaller first. It has led my team into doing weekly, even daily, releases where each release delivers some value without taking away what was already there. Always turning the discussion towards value in production, and the smallest possible increment of that value, has been helpful. It enables movement within the team. It enables reprioritization. And it means no one needs to escalate things to find a faster route to get the same thing done; the faster route is always the default.

We work a lot with the idea of being customer-oriented - even obsessed, if that wasn't such a negative word. We think a lot in terms of value, empathy and caring, and seek ways to care more directly. We don't have a product owner but a group of smart minds both inside the team and outside, supporting the team with business intel. The work we all do is supposed to turn into value in customers' hands. Production first helps us prioritize. Value in production, value to production.

We didn't always deliver this way or work this way. We built the way we work in this team in the last 2.5 years I've had the pleasure of enjoying the company of my brilliant team.

Looking at things from this perspective, I find there is a message I keep on repeating:

If you have a product owner (or product management organization) and they ask you to deliver a feature that customers are asking, they don't know everything but they do their best in understanding what that would be like. They define the scope in terms of value with the customer.

If they ask you to estimate how much work there is to do that, you need to have some idea of the scope. Odds are, your idea of the scope isn't same as theirs, and theirs is incomplete to begin with. The bigger the thing asked, the more the work unfolds as we are doing it.

They asked you for value 10. You thought it will take you effort 10. That is already two ways of defining the scope.

In delivery, you need to understand what the value expected really is. Often it is more in terms of effort than you first guessed.

Telling folks stuff like "you did not say the buttons needed to be rounded, like all the other buttons" or "the functionality is there, but the users just won't find it" may mean that it works as specified but not as really expected. I find that those trying to specify and pass on the spec do worse than those trying to learn, collaborate and deliver incrementally.

Scoping is a relationship, not something that is given to me. We discover features and value in collaboration, and delivering incrementally helps keep the discussion concrete. Understanding grows at every step of the way, and we should appreciate that.

** note: "Scope does not creep, understanding grows" is an insight I have learned from Jeff Patton. There are many things I know where I picked them up, while there's more where I can no longer pinpoint where the great way of describing my belief systems came from. I'm smiling wryly at the idea of mentioning the source every time I say this in office - we're counting hundreds. 

Getting best ideas to win

There's a phrase I keep repeating to myself:
Best ideas win when you care about work over credit. 
A lot of times, if you care about being attributed for the work you are doing, the strategies for getting the best ideas out there and implemented elude you. If you don't mind other people taking credit for your ideas (and work), you make a lot more progress.

Mob programming is a positive way of caring about work over credit. There we are all mutually credited for what comes out. But on the other hand, it is hard that you know something would not be what it is without you, and the likelihood of anyone recognizing your contribution in particular is low.

At TestBash Australia, we had a hallway conversation about holding on to the credit you deserve, and I shared a strategy I personally resort to when I feel my credit is unfairly assigned elsewhere: extensive positivity about the results, owning the results back through marketing them. People remember who told them the good news.

As a manager in my team, I've now tried going out of my comfort zone on sharing praise in public. After two attempts at it, I am frustrated at feeling corrected. I'm very deliberate about what I choose to say, who I acknowledge and when. I pay a lot of attention to the dynamics of the teams, and see the people who are not seen, generally speaking. What I choose to say is intentional, but so is what I choose not to say.

This time, I chose not to acknowledge the great work of an individual developer when getting a component out was very clearly team work. I remember a meeting I called together 5 weeks ago to guide the scope of the release down to something smaller, with the success of "that is ready, tomorrow". I remember facilitating the dedicated tester in designing the scope of testing, to share that there were weeks' worth of testing after that "ready". I remember how nothing worked while "ready", and the great work from the tester in identifying what needed attention, and the strong-headedness of not accepting bad explanations for real experiences. I remember another developer from the side guiding the first developer into creating analytics that would help us continue testing in production. I remember dragging third parties into the discussion, and facilitating things for better understanding amongst many, many stakeholders. It took a village, and the village had fun doing it. I would not thank one person for the work of the village.

Just a few hours later, I was feeling joy as one of the things I did acknowledge specifically was unfolding into wider knowledge in a discussion. I had tried getting a particular type of test created where it belonged, and failed, and made space for it to be created in my team. The test developer did a brilliant job implementing it and deserved the praise. Simultaneously, I felt the twitch of there being no praise for finding the way in an organization that was fighting back against doing the right thing and refusing feedback.

I can, in the background, remember to pat myself on the back, and acknowledge that great things happen because I facilitate uncomfortable discussions and practical steps forward. Testing is a great way of doing that. But all too often, it is also a great way of keeping yourself in the shadows, assigning praise to places where it wouldn't be without you.

Assigning credit is hard. We need to learn to appreciate the whole village.

Wednesday, November 7, 2018

Achievements of a Silo

Once upon a time there was a company, much like many other companies yet unique in many ways. As companies do, they hired some great people into different teams. They had one thing in common: all the people were awesome. But the people came from very different backgrounds and ideas.

In one of the teams where some great people in testing landed, the testers were feeling frustrated. With a new team, no infrastructure for builds and test automation, yet features flying around being implemented and tested, they found it hard to take the time and focus they felt they needed. So, as some great people do, they actively drove forward a solution: they created a new team on the side, focused on just creating the infrastructure, and dropped all the in-team work of testing they had managed to get started. Without facilitation, the in-team testing work turned tiny, focused on units and components, and perspectives around value and system vanished in hopes of someone else picking them up, like magic.

With the new team and new focus, the great people made great progress. They set up a fancy pipeline with all sorts of fancy tests, and a lovely set of images and documents to share what a great machinery they had built. Wherever this new team showed up, they remembered to tell how well they did, and all the awesome stuff the machinery now made available, with sample tests of all sorts that the pipeline theoretically should hold.

The original team focusing on features was handed the great machinery with high hopes of expanding it. The machinery-building team built more machinery, on the side of the machinery being used for real projects.

The fun part of this fable arrives after many months have passed. The overall project, with its lost focus on who owns the system perspectives, was struggling a bit, and it became obvious that getting a perspective into readiness wasn't an easy task. So, as companies do, a meeting was called.

In the meeting, the machinery team presented all the great things they had built, and great they were. With every example built into the machinery, the team focusing on features brought today's reality. That test job - turned off as it broke. Same with the other. And another. Of all the great things the machinery promised, none was realized in practice.

The lesson of this story is: it's not about your team's output, but about the outcome of all the different teams together. You can create the shiniest machinery there is, but if it is not used, and if the relevant parts of it in real use get turned off, your proof of concept running all the shiny things provided very little value. It may have taught the great people in the machinery team some valuable personal lessons from the technical perspective. What it should teach is that the value of whatever we are building comes from the use of it.

I'm a big believer in teams actively participating in building their continuous integration machinery, and slightly loathe people who believe that learning together while building it and taking it into use isn't needed because someone else could do the learning for you.

Learning with you is possible, learning for you is not. Achievements in a silo often end up worth little. 

Thursday, October 25, 2018

GDPR got my skeleton removed

On September 29th I wrote a blog post about my ex not deleting "relationship materials" on request. I felt, and still strongly feel, that delete on request is what is morally right. Normally, I recognize, he wouldn't be legally obligated. Pictures and texts could be considered a gift you cannot reclaim, regardless of how much emotional suffering their existence causes.

On October 10th, he confirmed he had deleted the stuff. The confirmation was short: "Done." No response to me checking: "Really? Thank you."

Today I had my chance to ask for final confirmation in a mediated call. The material is deleted.

But I asked a follow-up question: what made you change your mind? And the response is simultaneously sad and delightful.

What changed his mind was not the fact that many of his friends as well as internet strangers in the software community reached out to talk on my behalf. It was not that he would have cared that I was struggling with nightmares of him raping me, nightmares I couldn't control.

It was that I realized that the address I used to share those materials is a company address. And it is an address of his family's company, with a relevant risk of damage. Under GDPR, I have the right to request deletion of those materials, and that is what I did on October 10th, as I woke up in the middle of the night to yet another one of those awful nightmares. I emailed polite requests for deletion to the company he uses for email and to his own privately owned company, to get the "Done." a few hours later.

He expressed that I threatened his family, but there was nothing threatening in the request to delete the materials. Delete and all is well. There would be nothing threatening in the potential consequences either, unless my request to delete was actually valid - which it was. He was illegally holding private material on company computers. GDPR comes into play.

Lessons learned:

  1. GDPR actually works to get private data deleted. Thank you, European Union. And thank you to my testing profession for keeping me well aware of what this piece of legislation means. 
  2. Private materials belong on private computers. The traveling consultant's all-purpose computer for all things private and professional is a bad choice if you want to hold that stuff legally. 
  3. People you care for can really disappoint you big time. But when a door closes, another opens. 

Saturday, October 13, 2018

Finding the work that needs doing in multi-team testing

There's a pattern forming in front of my eyes that I've been trying to see clearly and understand for a good decade. This is a pattern of figuring out how to be a generalist while a test specialist in an agile team working on a bigger system, meaning multiple teams work on the same code base. What is it that the team, with you coaching, leading and helping them as a test specialist, is actually responsible for testing?

The way work is organized around me is that I work with a lovely team of 12 people. Officially, we are two teams, but we put ourselves all together to have the flexibility to organize into smaller groups around features or goals as we see fit. If there is anything defining where we tend to draw our box - one we are not limited by - it is drawn around what we call clients. These clients are C++ components and anything and everything needed to support those clients' development.

This is not a box my lovely 12 occupy alone. There's plenty of others. The clients include a group of service components that have since the dawn of time been updated more like hourly, and while I know some of the people working on those service components, there's just too many of them. And for the other components I find us working on, it is not like we'd be the only ones working on them. There are two other clear client product groups in the organization, and we happily share code bases with them while making distinct yet similar products out of them. And to not make it too simple, each of the distinct products comprises a system with another product that is obviously different for all three of us, and that system is the system our customers identify our product with.

So we have:
  • service components
  • components
  • applications
  • features
  • client products
  • system products
When I come in as a tester, I come in to care for the system products from the client products' perspective. That means that to find some of the problems I am seeking, I will need to use something my team isn't developing to find problems that are in the things my team is developing. And as I find something, it really no longer matters who will end up fixing it.

We also work with the principle of an internal open source project. Anyone - including me - in the organization can go make a pull request to any of the codebases. Obviously there are many of them, and they are in a nice variety of languages, meaning what I am allowed to do and what I am able to do can end up being very different.

Working with testing of a team that has this kind of responsibility isn't always straightforward. The communication patterns are networked, and sometimes finding out what needs doing feels like a puzzle to solve where all pieces are different but look almost identical. To describe this, I set out to identify the different sources of testing tasks for our responsibilities. We have:
  • Code Guardianship (incl. testing) and Maintenance of a set of client product components. This means we own some C++ and C# code and the idea that it works.
  • Code Guardianship and Maintenance of a set of support components. This means we own some Python code that keeps us running, a lot of it being system test code. 
  • Security Guardianship of a client product and all of its components, including ones we don't own. 
  • Implementing and testing changes to any necessary client product or support components. This means that when a member of our team goes and changes something others guard, we go as a team and ensure our changes are tested. The maintenance stays elsewhere, but all the other things we contribute.
  • End-to-end feature Guardianship and System Testing for a set of features. This means we see in our testing a big chunk of the end users' experience and drive improvements to it cross-team. 
  • Test all features for remote manageability. This means that for each feature there is a way of using that feature that the other teams won't cover but we will. 
  • Test other teams' features in the context of this product to some extent. This is probably the fuzziest thing we do. 
  • First point of support for all client product maintenance. If it does not work, we figure out how, and who in our ecosystem could get to fixing it. 
  • Releases. When it's all been tested already, we make the selections of what goes out and when, and do all the practicalities around it. 
  • Monitoring in production. We don't stop testing when we release, but continue with monitoring and identifying improvement needs.
To do my work, I follow my developers' RSS feeds in addition to talking with them. But I also follow a good number (60+) of components and the changes going into those. There is no way anymore Jira could provide me the context of the work we're responsible for, and how it flows forward. 
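To make the feed-following concrete, here is a minimal sketch of the kind of script this could be done with - assuming Python with the feedparser library; the feed URLs and their structure are placeholders I made up for illustration, not our actual setup.

import feedparser  # pip install feedparser

# Hypothetical component change feeds - a real list would hold the 60+ feeds I follow.
FEEDS = [
    "https://builds.example.com/component-a/changes.rss",
    "https://builds.example.com/component-b/changes.rss",
]

def latest_changes(feed_urls, per_feed=5):
    """Yield (component, change title, link) for the newest entries of each followed feed."""
    for url in feed_urls:
        feed = feedparser.parse(url)
        component = feed.feed.get("title", url)
        for entry in feed.entries[:per_feed]:
            yield component, entry.get("title", ""), entry.get("link", "")

if __name__ == "__main__":
    for component, title, link in latest_changes(FEEDS):
        print(f"{component}: {title} ({link})")

A daily glance at output like this gives the context of what is changing around us, without anyone having to file a ticket about it.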

I see others clinging to Jira with the hope that someone else tells them exactly what to do. And in some teams, someone does. That's what I call my "soul-sucking place". I would be crushed if my work were defined as doing that work identification for others. My good place is where we all know the rules of how to discover the work and volunteer for it. And how to prioritize it, and what of it we can skip for low risk because others are already doing some of it. 

The worst agile testing I did was when we thought the story was all there is. 

Thursday, October 11, 2018

How to Survive in a Fast-Paced World Without Being Shallow


As we were completing an exercise analyzing a tiny application and how we would test it, my pair looked slightly worn out and expressed their concern about going deeper in testing - time. It felt next to impossible to find time to do all the work that needed doing in fast-paced agile, with changes and deliveries and stories swooshing by. Just covering the basics of everything was full-time work!

I recognized the feeling, and we had a small chat on how I had ended up solving it by sharing much of the testing with my team's developers, to an extent where I might not show up for a story enough to hear it swoosh by. Basic story testing might not be where I choose to spend my time, as I have a choice. And right now I have more choices than ever, being the manager of all the developers.

**Note: the developers I have worked with in the last two places I've worked are amazing testers, and this is because I don't hog the joy of testing from them but allow them to contribute to the full. Using my managerial powers to force testing on them is a joke. Even if it has a little truth to it. 

Even with developers doing all the testing they can do, I still have stuff to test as a specialist in testing. And that stuff is usually the things developers have not (yet) learned to pay attention to.

For browser-based applications, I find myself spending time in browsers other than the developers' favorite, and with browser features set away from the usual defaults.
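As a rough sketch of what "away from the defaults" can mean in practice (assuming Python with Selenium 4 and geckodriver available; the preference values and URL are illustrative, not our actual configuration):

from selenium import webdriver

options = webdriver.FirefoxOptions()
# Step away from developer defaults: a different UI language, third-party cookies blocked.
options.set_preference("intl.accept_languages", "fi-FI, fi")
options.set_preference("network.cookie.cookieBehavior", 1)

driver = webdriver.Firefox(options=options)
try:
    driver.set_window_size(800, 600)       # not the maximized window most developers use
    driver.get("https://app.example.com")  # hypothetical application under test
    print(driver.title)
finally:
    driver.quit()

The point is not this particular script but the habit: whatever the team's default environment is, I explore from somewhere slightly off to its side.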

For our code and functionality, I find myself spending time interrogating the other software that could reside in the same environment, competing for attention. Some of my coolest bugs are in this category.

For lack of value in anything we ship, I find myself spending time using the application after it has been released, combining analytics and production environment use in my exploration.

To describe my tactic of testing, I explained the overall coverage that I am aware of, and then how I choose my efforts in a very specific pattern. I would first do something simple to show myself that it can work, to make sure I understand what we've built on a shallow level. Then I leave the middle ground of covering stuff for others. Finally, I focus my own efforts on adding things I find likely that others have missed.

This is patchy testing. It's the way I go deep in a fast-paced world so that I don't have to test everything in a shallow way.

Make a pick and remember: with continuous delivery, you are never really out of time for going deeper to test something. That information is still useful in future cycles of releasing. At least if you care about your users.


Saturday, October 6, 2018

Time warp to the Principle of Opportunity Cost

This Friday marked a significant achievement: we had five-figure numbers of users on the very latest versions of the software we worked on every single day. Someone asked about the time from idea to production, and learned this took us seven years. I was humbled to realize that while I had only been a part of two, I had pretty much walked through the whole path of implementing & testing and incremental delivery to get where we were.

When I worked at the same company on sort-of-same products over 12 years ago, one of the projects we completed was something we called WinCore. Back then, the project involved combining ideas of a product line and agile to have a shared codebase from which to build all the different Windows products. I remember frustrations around testing. Each product built from the product line had pieces of configurations that were essentially different. This usually meant that as one product was in the process of releasing, they would compromise the others' needs - for the lack of immediate feedback on what they broke.

Looking at today, test automation (and build automation) has been transformative. The immediate feedback on breaking something others rely on has resulted in a very different prioritization scheme, one that balances the needs of the three products we're still building.

The products are sort-of-same, meaning that while I last looked at them from a consumer point of view, this time I represent the corporate users. While much of the code base serves similar purposes as back then for the users, it has also been pretty much completely rewritten since, and does more things than it did back then. A lot of the change has happened so that testing and delivering value would flow better.

Looking at the achievement takes me back to thinking of what the 12-years younger version of me was doing as a tester, compared to the older version of me.

The 12-years younger version of me used her time differently:

  • She organized meetings, and participated in many. 
  • She spoke with people about the importance of exploratory testing, with emphasis on the risks of automation and how it could fail.
  • She was afraid of developers and treated them as people with higher status, and carefully considered when interrupting them was a thing to do.
  • She created plans and schedules, estimated, and used efforts to protect the plans with metrics.
The 12-years older version of me makes different choices:
  • Instead of being present in meetings, she sits amongst people of other business units doing her own testing work for serendipitous 1:1 communication. 
  • She speaks for the importance of automation, and drives it actively and incrementally forward, avoiding the risks she used to be concerned about. She still finds time for hands-on exploratory testing, finding things that would otherwise be missed. 
  • She considers fixing and delivering so important that she'll interrupt a developer if she sees anything worth reporting. She isn't that different from the developers, especially on the goals that are all common and shared.
  • She drives incremental delivery in short timeframes that removes the need for plans and estimates, and creates no test metrics.
Opportunity cost is the idea that when you choose to spend your time on one thing, you give up the value of the next best thing you could have done - so your choices as an individual employee matter. What value you choose to focus on matters. You can choose to invest in meetings or in 1:1 communication. You can choose to invest in warning about risks or in making sure the risks don't materialize. You can choose to test manually or create automation scripts. When you're doing something, you are not doing something else. 

Are you in control of your choices, or are someone else's choices controlling you? You're building your future today; are you investing in a better future or just surviving today? 



Wednesday, October 3, 2018

Chartering for Exploratory Testing

As exploratory testing is framed around learning and discovery, done by a person, it is unnatural to split it into test cases; instead we use time, often referred to as a session. Some folks have suggested that a session (time-box) should be uninterrupted and focused, and that is quite natural given the learning nature of exploratory testing. If you find yourself distracted and interrupted, the likelihood of doing the same starting work many times and not making much progress is high. There are different ideas of what the uninterrupted time can be, and also of what types of interruptions really matter so much that you need to break out of your reporting unit.

Some talk about doing a pomodoro - 25 minutes, referring to research on how long people can focus. Some talk about at most 2 hours. My personal preference is to deal with a unit of "days of work", or at most "before lunch" and "after lunch" half days, and mind a little less about the interruptions.

With the session as the unit, before going into that unit of time, it makes sense to stop and think about what you would be doing. Since test cases make little sense, in exploratory testing we've come to talk about charters. A charter is an idea guiding you as you go into exploration. What would you try to do? What would you focus on? How would you tell if you're done as in task completed, or done as in time ran out?

Elisabeth Hendrickson proposed in her book Explore It! a template that would be helpful in an agile, whole-team-exploring type of context for sharing the ideas of what needs to be tested as charters. The template to help thinking is:
Explore . . .
With . . .
To discover . . .
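Purely as an illustration of the shape (the charter content below is invented for this example, not from a real session), a charter in that template could be captured as lightweight data:

from dataclasses import dataclass

@dataclass
class Charter:
    explore: str       # the target area or feature
    with_: str         # resources, data, tools or perspectives to use
    to_discover: str   # the kind of information we hope to surface
    timebox: str = "half a day"   # my preferred unit, rather than a strict session

example = Charter(
    explore="the settings view of the client",
    with_="an expired login and a second user changing the same settings",
    to_discover="how the view copes when its data goes stale underneath it",
)
print(f"Explore {example.explore}\nWith {example.with_}\nTo discover {example.to_discover}")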
I've not cared much for the charter template, and rather than looking for a particular form of a charter, I think of the timeframe and goal setting for myself. I have no issues with using a user story as my charter, or even using the same user story with an idea of paying attention to a particular perspective on consecutive sessions. A lot of times I cannot even say I have a charter for a specific session, other than "get started with testing, figure out what you got done".

Today, my team's tester brought in a list of features and perspectives. They were not organized as charters, but it was clear that they could have been. That, however, would have meant fixing their ideas of how to combine them prematurely. Sometimes the need to charter (in writing) in agile teams creates this idea of "check this, done", where each charter is really an open-ended quest for information, and can / should both create new charters and transform older charters into something better using the learning the testing gives.

If I write charters, I write one for each who is testing, and debrief to create the next ones after the first ones are completed.

A lot of times I don't need to share charters when exploring with others. I need to share questions, ideas of documentation (automation), and bugs.

There is a problem before chartering where a lot of testers stumble, as per my observation - having the skills to generate versatile ideas. I was watching a candidate for a job today test in front of my eyes, and was slightly surprised at the low number of ideas they would consider given an application, expecting a specification to prompt them for all things relevant. At the best of times, a spec exists and is useful, but it is never complete. Charters are only as good as the ideas we have to put into them. 

Deep Testing and Test Levels

Back in 2002, I wrote an article for an academic conference that basically centered around the idea that test levels (as they were mostly taught back then, without a "test automation pyramid"), while not time-based, are useful in agile. These days I rarely speak of this idea any more, but it is a foundation I speak from.

I came back to think about this after my Deep Testing post a few days ago, as Lisa shared:

Since I have written about the very same levels, I felt like I wanted to express how I model test levels as a very different idea from the depth of testing. Depth works as a synonym for words like "bad quality" = shallow and "good quality" = deep, and for multi-dimensional coverage. Levels as a concept is, for me, both more shallow and serving a different purpose.

Levels of testing tell me that, as an observer of testing, there is one helpful set of glasses I can wear to notice information about the system. Looking at the details of a leaf in a tree, it may be hard for me to appreciate what makes up the tree and why it matters, or how trees make up a forest or how forests belong to the world as its lungs. Looking at things on different levels leads me to generate slightly different ideas. I may or may not act on those ideas. I may or may not recognize that those ideas even exist.

That is where depth comes in. If I don't have the skill to use the heuristic of levels to see things, my testing, even if it happens on all the different levels, is shallow. It finds easy-to-spot bugs, the ones I'm ready to spot with the learning of the system I have done so far.

Depth speaks about my perception of the trustworthiness of the testing performed. Shallow is testing you perform with your mind's eye more closed, with a single heuristic applied and without doing complex modeling on multiple dimensions. Deep is testing that finds more of the important things, things that are not straightforward, things that are not just stuff users find when left alone, but that users trip on when you watch them using the system - and they don't even understand they could be asking for more and better. Deep testing is for the problems where your system is down for 5 minutes and everyone just accepts that, because no one can reproduce how you got there, or why no one even needs to do anything to recover from the problem. Users just know to go for coffee when that happens.


Tuesday, October 2, 2018

Finding bugs serendipitously

Serendipity means 'lucky accident'. As I speak of doing shallow exploratory testing, a colleague expressed their concern that all the bugs they find are found serendipitously.
"I feel like most of my bugs are serendipitous, and that concerns me."
I wanted to share a story, and a perspective.

As I joined a new job - the one before this - I was determined to do hands-on, good quality testing in my first week at the new job. I've had the experience of joining companies before, where I found myself being trained into the company without actually doing any of the work I was hired for in the first weeks. And I wanted things to be different. I wanted the old saying of "people taking months in a new job before they are productive" not to be true, and set that out as my goal.

As I arrived at the office, they gave me access to the system I was to test. I could barely get my computer open, log into the system with my credentials, and bookmark the page to remember where the system was before I was dragged into a four-something-hour meeting spree where they poured information into my head that I have absolutely no recollection of.

In the afternoon, I returned to my computer with the original determination, and I opened the application only to see a big visible crash.


I had done NOTHING. No use of my brilliant testing skills. Very very shallow testing at best. Anyone would see this problem. Except they did not.

I had serendipitously found a bug where one particular subpage of the application (I had managed to click ONE thing after logging in before bookmarking it) crashed when the login was no longer valid, and when we investigated the bug with the developers, we learned it was also the ONLY subpage of that type. I honestly got lucky that day, but over time I would have increased my likelihood of running into this with the ideas for doing exactly this that I was in control of, thanks to Elisabeth Hendrickson's Cheat Sheet. 

A lot of the depth in testing comes with skill, and knowing how to exercise a variety of ideas. But much of it also comes from serendipity combined with recognizing problems when you see them (a skill!) and just sticking with the application longer. 

Serendipity sounds like just luck, but it is particular kind of luck combined with skill and perseverance.