Thursday, October 25, 2018

GDPR got my skeleton removed

On September 29th I wrote a blog post about my ex not deleting "relationship materials" on request. I felt, and still strongly feel, that deleting on request is what is morally right. Normally, I recognize, he wouldn't be legally obligated. Pictures and texts could be considered gifts you cannot reclaim, regardless of how much emotional suffering their existence causes.

On October 10th, he confirmed he had deleted the material. The confirmation was short: "Done." No response when I checked: "Really? Thank you."

Today I had my chance to ask for final confirmation in a mediated call. The material is deleted.

But I asked a follow-up question: what made you change your mind? The response is simultaneously sad and delightful.

What changed his mind was not the fact that many of his friends, as well as internet strangers in the software community, reached out to talk on my behalf. It was not that he would have cared that I was struggling with nightmares, which I couldn't control, of him raping me.

It was that I realized the address I had used to share those materials was a company address. And it was an address of his family's company, with a relevant risk of damage. Under GDPR, I have the right to request deletion of those materials, and that is what I did on October 10th, as I woke in the middle of the night to yet another of those awful nightmares. I emailed polite requests for deletion to the company he uses for email and to his own privately owned company, and got the "Done." a few hours later.

He expressed that I had threatened his family, but there was nothing threatening in the request to delete the materials. Delete, and all is well. There would be nothing threatening in the potential consequences either, unless my request to delete was actually valid, which it was. He was illegally holding private material on company computers. GDPR comes into play.

Lessons learned:

  1. GDPR actually works to get private data deleted. Thank you, European Union. And thank you to my testing profession for keeping me well aware of what this piece of legislation means. 
  2. Private materials belong on private computers. The traveling consultant's all-purpose computer for all things private and professional is a bad choice if you want to keep that stuff legally. 
  3. People you care for can really disappoint you big time. But when a door closes, another opens. 

Saturday, October 13, 2018

Finding the work that needs doing in multi-team testing

There's a pattern forming in front of my eyes that I've been trying to see clearly and understand for a good decade. It is a pattern of figuring out how to be a generalist while being the test specialist in an agile team working on a bigger system, meaning multiple teams work on the same code base. What is it that the team, with you coaching, leading, and helping them as the test specialist, is actually responsible for testing?

The way work is organized around me is that I work with a lovely team of 12 people. Officially we are two teams, but we put ourselves all together to have the flexibility to organize into smaller groups around features or goals as we see fit. If there is anything defining where we tend to draw our box, a box that does not limit us, it is drawn around what we call clients. These clients are C++ components, plus anything and everything needed to support those clients' development.

This is not a box my lovely 12 occupy alone. There are plenty of others. The clients include a group of service components that have since time immemorial been updated more like hourly, and while I know some of the people working on those service components, there are just too many of them. And for the other components I find us working on, it is not like we'd be the only ones working on them. There are two other clear client product groups in the organization, and we happily share code bases with them while making distinct yet similar products out of them. And not to make it too simple, each of the distinct products comprises a system with another product that is obviously different for all three of us, and that system is the system our customers identify our product with.

So we have:
  • service components
  • components
  • applications
  • features
  • client products
  • system products
When I come in as a tester, I come in to care for the system products from the client products' perspective. That means that to find some of the problems I am seeking, I need to use something my team isn't developing to find problems in the things my team is developing. And once I find something, it really no longer matters who ends up fixing it.

We also work with the principle of an internal open source project. Anyone in the organization, including me, can make a pull request to any of the codebases. Obviously there are many of them, and they come in a nice variety of languages, meaning what I am allowed to do and what I am able to do can end up being very different.

Working on the testing of a team with this kind of responsibility isn't always straightforward. The communication patterns are networked, and sometimes finding out what needs doing feels like a puzzle where all the pieces are different but look almost identical. To describe this, I set out to identify the different sources of testing tasks within our responsibilities. We have:
  • Code Guardianship (incl. testing) and Maintenance of a set of client product components. This means we own some C++ and C# code, and the idea that it works. 
  • Code Guardianship and Maintenance of a set of support components. This means we own some Python code that keeps us running, a lot of it being system test code. 
  • Security Guardianship of a client product and all of its components, including ones we don't own. 
  • Implementing and testing changes to any necessary client product or support components. This means that when a member of our team goes and changes something others guard, we as a team ensure our changes are tested. The maintenance stays elsewhere, but all the other things we contribute. 
  • End-to-end feature Guardianship and System Testing for a set of features. This means we see in our testing a big chunk of the end users' experience and drive improvements to it cross-team. 
  • Test all features for remote manageability. This means that for each feature, there is a way of using that feature that the other teams won't cover but we will. 
  • Test other teams' features in the context of this product, to some extent. This is probably the fuzziest thing we do. 
  • First point of support for all client product maintenance. If something does not work, we figure out how, and who in our ecosystem could get to fixing it. 
  • Releases. When it has all already been tested, we make the selections of what goes out and when, and do all the practicalities around it. 
  • Monitoring in production. We don't stop testing when we release, but continue with monitoring and identifying improvement needs.
To do my work, I follow my developers' RSS feeds in addition to talking with them. But I also follow a good number (60+) of components and the changes going into them, as sketched below. There is no way Jira could any longer provide me with the context of the work we're responsible for and how it flows forward. 
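
To make that concrete, here is a minimal sketch of the kind of feed-following I mean, assuming hypothetical feed URLs and the Python feedparser library; my actual setup is an ordinary feed reader, so treat this as an illustration rather than my tooling.

    # A sketch: aggregate recent changes across many component feeds.
    # The feed URLs are hypothetical placeholders, not real components.
    import feedparser

    COMPONENT_FEEDS = [
        "https://example.com/components/client-core/commits.atom",
        "https://example.com/components/support-tools/commits.atom",
        # ... in practice, one feed per followed component, 60+ of them
    ]

    def recent_changes(feed_urls):
        """Yield (component, title, updated) for each entry in each feed."""
        for url in feed_urls:
            feed = feedparser.parse(url)
            component = feed.feed.get("title", url)
            for entry in feed.entries:
                yield component, entry.get("title", ""), entry.get("updated", "")

    for component, title, updated in recent_changes(COMPONENT_FEEDS):
        print(f"{updated}  {component}: {title}")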

I see others clinging to Jira in the hope that someone else tells them exactly what to do. And in some teams, someone does. That's what I call my "soul-sucking place". I would be crushed if my work were defined as doing that work identification for others. My good place is where we all know the rules of how to discover the work and volunteer for it. And how to prioritize it, and what of it we can skip as low-risk because others are already doing some of it. 

The worst agile testing I did was when we thought the story was all there is. 

Thursday, October 11, 2018

How to Survive in a Fast-Paced World Without Being Shallow


As we were completing an exercise in analyzing a tiny application and how we would test it, my pair looked slightly worn out and expressed their concern about going deeper in testing: time. It felt next to impossible to find time to do all the work that needed doing in fast-paced agile, with changes and deliveries, stories swooshing by. Just covering the basics of everything was full-time work!

I recognized the feeling, and we had a small chat on how I had ended up solving it by sharing much of the testing with my team's developers, to the extent that I might not show up for a story long enough to hear it swoosh by. Basic story testing might not be where I choose to spend my time, as I have a choice. And right now I have more choices than ever, being the manager of all the developers.

Note: the developers I have worked with in the last two places are amazing testers, and this is because I don't hog the joy of testing from them but allow them to contribute to the full. Using my managerial powers to force testing on them is a joke, even if it has a little truth to it.

Even with developers doing all the testing they can do, I still have stuff to test as a specialist in testing. And that stuff is usually the things developers have not (yet) learned to pay attention to.

For browser-based applications, I find myself spending time in browsers other than the developers' favorite, and with browser features set away from the usual defaults.
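
As an illustration of what "away from the usual defaults" can look like, here is a minimal sketch using Playwright for Python; the URL and the particular settings (locale, timezone, JavaScript disabled) are assumptions picked for the example, not a fixed recipe.

    # A sketch: open a page in a deliberately non-default browser setup.
    # The URL and the chosen settings are illustrative examples only.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.firefox.launch()  # not the developers' favorite browser
        context = browser.new_context(
            locale="fi-FI",                  # non-default locale
            timezone_id="Pacific/Auckland",  # non-default timezone
            java_script_enabled=False,       # a feature set away from its default
        )
        page = context.new_page()
        page.goto("https://example.com/app-under-test")
        print(page.title())  # explore from here: does it degrade gracefully?
        browser.close()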

For our code and functionality, I find myself spending time interrogating the other software that could reside in the same environment, competing for attention. Some of my coolest bugs are in this category.

For value lacking in anything, I find myself spending time using the application after it has been released, combining analytics and production-environment use in my exploration.

To describe my testing tactic, I explained the overall coverage that I am aware of, and how I then choose my efforts in a very specific pattern. I first do something simple to show myself that it can work, to make sure I understand what we've built on a shallow level. Then I leave the middle ground of covering stuff to others. Finally, I focus my own efforts on adding the things I find it likely that others have missed.

This is patchy testing. It's the way I go deep in a fast-paced world so that I don't have to test everything in a shallow way.

Make a pick and remember: with continuous delivery, you are never really out of time for going deeper to test something. That information is still useful in future cycles of releasing. At least if you care about your users.


Saturday, October 6, 2018

Time warp to the Principle of Opportunity Cost

This Friday marked a significant achievement: we had five-figure numbers of users on the very latest versions of the software we work on, every single day. Someone asked about the time from idea to production, and learned this took us seven years. I was humbled to realize that while I had only been part of it for two, I had pretty much walked the whole path of implementing and testing and incremental delivery to get where we were.

When I worked at the same company on sort-of-same products over 12 years ago, one of the projects we completed was something we called WinCore. The project involved combining the ideas of a product line and agile to have a shared codebase from which to build all the different Windows products, and I remember frustrations around testing. Each product built from the product line had pieces of configuration that were essentially different. This usually meant that as one product was in the process of releasing, it would compromise the others' needs, for the lack of immediate feedback on what it broke.

Looking at today, test automation (and build automation) has been transformative. The immediate feedback on breaking something others rely on has resulted in a very different prioritization scheme, one that balances the needs of the three products we're still building.

The products are sort-of-same, meaning that while I last looked at them from a consumer point of view, this time I represent the corporate users. While much of the code base serves similar purposes for the users as back then, it has also been pretty much completely rewritten since, and does more than it did back then. A lot of the change has happened so that testing and delivering value would flow better.

Looking at the achievement takes me back to thinking of what the 12-years younger version of me was doing as a tester, compared to the older version of me.

The 12-years-younger version of me used her time differently:

  • She organized meetings, and participated in many. 
  • She spoke with people about the importance of exploratory testing, with emphasis on the risks of automation and how it could fail.
  • She was afraid of developers, treated them as people with higher status, and carefully considered when interrupting them was a thing to do.
  • She created plans and schedules, estimated, and used effort to protect the plans with metrics.
The 12-years-older version of me makes different choices:
  • Instead of being present in meetings, she sits amongst people of other business units doing her own testing work, for serendipitous 1:1 communication. 
  • She speaks for the importance of automation, and drives it actively and incrementally forward, avoiding the risks she used to be concerned about. She still finds time for hands-on exploratory testing, finding things that would otherwise be missed. 
  • She considers fixing and delivering so important that she'll interrupt a developer if she sees anything worth reporting. She isn't that different from the developers, especially on the goals, which are all common and shared.
  • She drives incremental delivery in short timeframes, removing the need for plans and estimates, and creates no test metrics.
Opportunity cost is the idea that your choices as an individual employee matter. What value you choose to focus on matters. You can choose to invest in meetings or in 1:1 communication. You can choose to invest in warning about risks or in making sure the risks don't materialize. You can choose to test manually or to create automation scripts. When you're doing something, you are not doing something else. 

Are you in control of your choices, or are someone else's choices controlling you? You're building your future today; are you investing in a better future or just surviving today? 



Wednesday, October 3, 2018

Chartering for Exploratory Testing

As exploratory testing is framed around learning and discovery, done by a person, it is unnatural to split it into test cases; instead we use time, often referred to as a session. Some folks have suggested that a session (time-box) should be uninterrupted and focused, and that is quite natural given the learning nature of exploratory testing. If you find yourself distracted and interrupted, the likelihood of doing the same starting work many times without making much progress is high. There are different ideas of how long the uninterrupted time can be, and also of what types of interruptions matter so much that you need to break out of your reporting unit.

Some talk about doing a pomodoro: 25 minutes, referring to research on how we people can focus. Some talk about at most two hours. My personal preference is to deal with a unit of "days of work", or at most "before lunch" and "after lunch" half days, and mind a little less about the interruptions.

With the session as the unit, it makes sense, before going into that unit of time, to stop and think about what you will be doing. Since test cases make little sense, in exploratory testing we've come to talk about charters. A charter is an idea guiding you as you go into exploration. What will you try to do? What will you focus on? How will you tell if you're done, as in task completed, or done, as in time ran out?

Elisabeth Hendrickson proposed in her book Explore It! a template that is helpful in an agile, all-team-shares-the-exploring type of context for sharing the ideas of what needs to be tested as charters. The template to help thinking is:
Explore . . .
With . . .
To discover . . .
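For example, a filled-in charter (my illustration, not one from the book) might read:
Explore the session handling . . .
With expired credentials and stale bookmarks . . .
To discover how failures are surfaced to the user.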
I’ve not cared much for the charter template, and rather than looking for a particular form of a charter, I think of the timeframe and the goal setting for myself. I have no issue using a user story as my charter, or even using the same user story with the idea of paying attention to a different perspective on consecutive sessions. A lot of the time I cannot even say I have a charter for a specific session, other than “get started with testing, figure out what you got done”.

Today, my team’s tester brought in a list of features and perspectives. They were not organized as charters, but it was clear that they could have been. But that would have meant fixing, prematurely, their ideas of how to combine them. Sometimes the need to charter (in writing) in agile teams creates this idea of “check this, done”, where each charter should be an open-ended quest for information, and can, and should, both create new charters and transform older charters into something better using the learning the testing gives.

If I write charters, I write one for each person who is testing, and debrief to create the next ones after the first are completed.

A lot of the time I don’t need to share charters when exploring with others. I need to share questions, ideas of documentation (automation), and bugs.

There is a problem before chartering where, as per my observation, a lot of testers stumble: having the skills to generate versatile ideas. I was watching a candidate for a job test in front of my eyes today, and was slightly surprised at the low number of ideas they would consider given an application, expecting a specification to prompt them for all things relevant. At the best of times, a spec exists and is useful, but it is never complete. Charters are only as good as the ideas we have to put into them. 

Deep Testing and Test Levels

Back in 2002, I wrote an article for an academic conference that basically centered on the idea that test levels (as they were commonly taught then, without a "test automation pyramid"), while not time-based, are useful in agile. These days I rarely speak of this idea any more, but it is a foundation I speak from.

I came back to think about this after my Deep Testing post a few days ago, which Lisa shared onwards.

Since I have written about these very same levels, I felt I wanted to express how I model test levels as a very different idea from the depth of testing. Depth works as a synonym for quality, "bad quality" = shallow and "good quality" = deep, together with multi-dimensional coverage. Levels as a concept is, for me, both more shallow and serving a different purpose.

Levels of testing tell me that, as an observer of testing, there is a helpful set of glasses I can wear to notice information about the system. Looking at the details of a leaf in a tree, it may be hard for me to appreciate what makes up the tree and why it matters, or how trees make up a forest, or how forests belong to the world as its lungs. Looking at things on different levels leads me to generate somewhat different ideas. I may or may not act on those ideas. I may or may not recognize that those ideas even exist.

That is where depth comes in. If I don't have the skill to use the heuristic of levels to see things, my testing, even if it happens on all of the different levels, is shallow. It finds easy-to-spot bugs, the ones I'm ready to spot with the learning of the system I have done so far.

Depth speaks about my perception of the trustworthiness of the testing performed. Shallow is testing you perform with your mind's eye more closed, with a single heuristic applied and without doing complex modeling on multiple dimensions. Deep is testing that finds more of the important things: things that are not straightforward, things that are not just the stuff users find when left alone, but that users trip on when you watch them using the system, not even understanding they could be asking for more and better. Deep testing is for the problems where your system is down for 5 minutes and everyone just accepts it, because no one can reproduce how you get there, and no one even needs to do anything to recover from the problem. Users just know to go for coffee when that happens.


Tuesday, October 2, 2018

Finding bugs serendipitously

Serendipity means 'lucky accident'. As I spoke of doing shallow exploratory testing, a colleague expressed their fear of finding all their bugs serendipitously.
"I feel like most of my bugs are serendipitous, and that concerns me."
I wanted to share a story, and a perspective.

When I joined a new job, the one before this, I was determined to do hands-on, good-quality testing in my first week at the new job. I've had the experience of joining companies before where I found myself being trained into the company without actually doing any of the work I was hired for in the first weeks. And I wanted things to be different. I wanted the old saying that people take months in a new job before they are productive not to be true, and set that out as my goal.

When I arrived at the office, they gave me access to the system I was to test. I could barely get my computer open, log into the system with my credentials, and bookmark the page to remember where the system was before I was dragged into a four-something-hour meeting spree where they poured information into my head that I have absolutely no recollection of.

In the afternoon, I returned to my computer with the original determination, and I opened the application only to see a big visible crash.


I had done NOTHING. No use of my brilliant testing skills. Very very shallow testing at best. Anyone would see this problem. Except they did not.

I had serendipitously found a bug with bookmarking one particular subpage of the application (I had managed to click ONE thing after logging in before bookmarking it) that crashed when the login was no longer valid, and when we investigated the bug with the developers, we learned it was also the ONLY subpage of that type. I honestly got lucky that day, but over time I would have increased my likelihood of running into this, with ideas to do exactly this that I was in control of thanks to Elisabeth Hendrickson's Test Heuristics Cheat Sheet. 

A lot of the depth in testing comes with skill, and knowing how to exercise a variety of ideas. But much of it also comes from serendipity combined with recognizing problems when you see them (a skill!) and just sticking with the applications longer. 

Serendipity sounds like mere luck, but it is a particular kind of luck, combined with skill and perseverance. 

Monday, October 1, 2018

Deep Exploratory Testing

There's a famous saying known as Linus's Law, named after Linus Torvalds:
Given enough eyeballs, all bugs are shallow. 
Crowdsourcing references often like to quote this, pointing out that of the bugs we could find in testing, the users in production, en masse, end up finding all the relevant ones, even if they don't report them. A crowd can do well at hitting a bunch of bugs.

For the purposes of doing and guiding exploratory testing, I find it really beneficial to think in terms of shallow vs. deep testing. Shallow can be done with less skill and in less time. Deep testing requires more skill, more insight, and a foundation of learning that is built in layers and requires time.

Many people find that agile somehow guides them to doing only shallow testing. They feel their testing is always squeezed to the end of the sprint, and that this is so the development schedule stays flexible while the testing schedule is fixed. However, they may fail to see the opportunity of testing continuing after the release, focusing on going deeper.

Shallow testing finds shallow bugs. Shallow bugs are easy to find; they are obvious and would become a problem in production immediately. Deep testing finds deep bugs. It may lead us to shallow bugs that just take a bit more effort to see, combinations and conditions that take time to set up. But it may also lead us to bugs some don't consider bugs: things that threaten the value of the product, things that should be different to be better.

Going deep happens in layers. You don't repeat the same, but you go further, deeper. You start before implementing. You continue while implementing. You don't stop for releasing. You don't have to, because you are not on a project. Agile made it a continuous process where there is no end.


Sum it all up, and it totals deep testing. Miss the skills, and all you get is shallow. The additive way of doing testing is not regression testing. It is finding new perspectives, and exploratory testing is the core practice for doing that.