Tuesday, February 23, 2021

End to End Testing as a Test Strategist

For the last 10 months, I've worked on a project that assigned me end to end testing responsibility. As I joined the project, I had little clue about what the work would end up being. I trusted that, like with all other work, things would sort themselves out as I actively learned about it.

Armed with the idea of job crafting ('make the job you have the job you want'), I knew what I wanted:

  • principal engineer position with space for hands-on work
  • making an impact, being responsible
  • working with people, not alone
With a uniquely open mandate, I figured out my role would be a mix of three things:
  1. tester within a team
  2. test manager within a project delivering a system
  3. test coach within the organization 
The learnings have been fascinating. 

My past experience of growing teams into taking responsibility for the system rather than just their own components was helpful. It resulted in seeing both the confusion of one team member choosing a scope of work bigger than the team's, and the importance of that confusion.

I had collected more answers to 'why' than many people with a longer history in the company, by focusing on connections. The connections enabled me to prioritize the testing I did, and to ask others for the testing I wasn't going to do myself.

The way I chose to do end to end testing differed from what my predecessor had chosen it to be for them, as I executed a vision of getting things end-to-end tested, but in small pieces.

I focused on enabling continuous delivery where I could, taking pride in the fact that my team was able to release their part of the system 26 times in 2020.

I focused on enabling the layering of testing, having someone else test things that were already tested. It often proved valuable, and enabled an approach to testing where we are stronger together. The layered approach allowed me to experience true smoke testing (finding hardware problems in incremental system testing).

The layered approach also helped me understand where my presence was needed most for the success of the project, and to move my tester presence from one team in the project to another halfway through. I believe this mobility made a difference at scale, as I could help critical parts from within.

I came to appreciate conversations as part of figuring out who would test things. I saw amazing exploratory testing emerge, done by developers following hunches about critical areas of risk from those conversations.

Instead of taking my end to end testing responsibility as a call to test, I took it as a call to facilitate. While I have used every single interface the system includes (and there are many!), every interface has someone who knows it more deeply and has local executable documentation - test automation - that captures their concerns as well as many of mine.

Looking back, it has been an effort to do end to end testing as a test strategist. You can't strategize about testing without testing. My whole approach to finding the layers is founded on the idea that I test alongside the others. Not just the whole team, but all of the teams, and in a system of this size, that is some ground to cover.


Sunday, February 14, 2021

Three routes to fixing problems test cases hide

I'm a big fan of exploratory testing, which often means I have reservations about test cases - or at least ideas about how to interpret test cases in a way that does not require such an intensive investment in writing things down, that helps us write down the things that are needed, and that keeps us from thinking of test cases - or any documentation, for that matter - as representing all the testing we are doing.

Today I wanted to share three experiences from my career, from three different organizations, on how we tweaked our test cases to fix a problem all three organizations shared: spending a lot of time on testing, but leaking significant bugs we believed we should have been able to find.

Organization 1: Business Acceptance Testing

The first example is from an organization where I managed business acceptance testing. I was working with two different projects, moving the business acceptance testing phase from a months-long endeavor to something that would fit into 30 days. One of my projects had a history of writing detailed test cases; the other had a history of not really using test cases. In getting the timeframe condensed, understanding what we had in mind to do and being able to reprioritize was essential.

For both projects, we used Quality Center (HP's ALM solution was called that back then). Both projects started with test data in mind, and that is what we used as a starting point for our tests. We selected our test data against a set of criteria and wrote the criteria down in the test case title, summarizing the business need for that particular data type. As test steps, we used Quality Center's concept of test templates - a reusable set of steps, the same for every single test case, that described at a high level the processes the two teams were running.

Thus our test cases were titles, with template checklists to help us analyze and reprioritize our work. Identical-looking tests could take a day in the first week, and later in the cycle we could spend 15 minutes on them. The test case looked the same, but we used it differently, to explore.

One of the two projects had a history of writing test cases where the steps also described the detail, and they were concerned that giving those up might mean forgetting to cover something, as information about changes isn't easy to pass around a whole group doing acceptance testing. So we split our weeks: in the first two we used the "old style" detailed tests, and in the latter two the new style. We found all the problems during the latter two weeks, but in general the software contractor had done a really great job with their testing, and the number of bugs we had to deal with was record low.

Organization 2: Product testing with Reluctant Developers

The second example is from an organization I joined as their first and only test specialist. Under their project manager's leadership, they had figured out a way of writing test cases into Word documents, one for each major area of the product. Tracking that the test cases were completed was central to the way the group of developers tested. Automation, on unit or system level, was not yet a thing for them.

As I joined, the project manager wanted me to start creating test case documents like the ones they had, improving them, and had ideas of how many test cases I would be expected to complete every day.

Sampling one of the existing test specifications: it had 39 pages, 46 test cases, and only 3 pieces of relevant information that I could not have figured out from commonly available knowledge without reading the text.

I made a deal with the project manager to write down structured notes while I tested, and we got to a place where I was trusted with testing, the reluctant developers were trusted to test with me, and the test cases went away. Instead, we used checklists of features to remind us what could be checked, designing tests in the moment based on what the changes to the system were.

Organization 3: Product testing with certification requirements

The third example is from an organization with a history of writing test cases that are traced back to requirements. Test cases are stepwise instructions. 

The change I introduced was to have two kinds of test cases: [Scenario] and [Feature]. Scenarios are what we test with, and they leave a lot of room for what exactly needs to be verified - the same test could take a week or an hour. For Scenarios, the steps are features as a checklist: what features are part of that user journey. When we feel we need a reminder of how to see a basic, sunny-day version of a feature, to remember where testing starts from, that is where Feature tests come in. The guideline was to write down only what wasn't obvious and to keep instructions concise. There can be a Feature test without any steps at all; steps are optional.

Clearly, the test cases don't describe all the testing that takes place. But they describe seeing that what we promised would be there is there, and they help us remember and pass on the information of how to see a feature in action.

The Problems Test Cases Hide

Test cases can lead people into thinking that when they've done what they designed - the test cases - they are done testing. But testing does not work that way. The ways software can fail are versatile and surprising. And we care about results - information and bugs - over the documentation.

Too much openness does not suit most of us. But too much prescription suits us even worse. And if prescription is something we want to invest in, automation is a great way of documenting in a prescribed manner.


Saturday, February 13, 2021

Faking testing and getting caught

A lot of my work is about making sense of the testing we do, and figuring out the quality of testing. The management challenge with testing is that after they come to terms with investing in it, it isn't obvious whether the investment is worth it. Surely, the teams do testing. But do they do it so that it provides the results we expect? Faking testing isn't particularly hard, and the worst part is that a lot of the existing processes encourage faking testing over testing that provides results. With an important launch, or an important group of customers we're building a business with, we don't want to be surprised by the scale or type of issues we have never even discussed before.

I find myself on the hunt for two aspects of quality in testing: effectiveness and efficiency.

Effectiveness asks whether, no matter what testing we do, it gives us the layers we need to avoid being caught red-handed with our customers. To be effective at testing, we have two main approaches:

  • Build well - bad testing won't make quality worse
  • Test well - when issues emerge, finding them would be good
A good way to be effective at testing is to bring more eyes to the problem. Don't ask only the developer-developer pair to test; add a third person you believe will take testing to heart and prioritize it. But why settle for three people if you could have ten, or 10 000? Grow the group. From unit testing to system testing, to end-to-end testing of systems, to pilot customers, to beta customers, to real customers in production with continuous releases. That will tell you about the effectiveness of the earlier stages, and make you effective overall.

This effectiveness consideration is the most important work I do, and sometimes - I would even say often - I catch people in this activity faking testing. Most often people don't even realize they are faking testing; they approach testing with the wrong skills and focus, not providing the result it would be reasonable to expect for investing in that layer.

Faking testing, no matter how unintentional, looks like we do a lot - and we do - but we get nothing out. And we notice only when there is a need for testing well, because building well alone did not give us sufficient results. The later stages - hopefully designed into the overall testing close in time to the faked testing, rather than discovering the quality of testing at delivery - reveal bugs at scales and of types that we would reasonably have expected to be found earlier if we had done those stages well.

While effectiveness is the key, once it is in place I look at efficiency. There is only so much we should have to invest in testing at the various layers; testing should increase our productivity and not slow us down, and we might be investing in one box of testing thinking it covers more boxes than it should for the return the investment gives us.

These puzzles often fill my days, and I try to help people learn exploratory testing to get away from faking testing. They don't want to fake it, they just do what they think they should.


Friday, February 12, 2021

In Search of a Test Automation Strategy

The world of software as I know it has changed for me. I no longer join projects in preparation for a testing phase that happens at the end; instead, I am around from when I'm around until I am no longer around, building testing that survives when I am gone.

Back in the days of a testing phase at the end of a project, test strategy used to be the ideas you prepared in order to work through that challenging phase. It gave the tests you would do a framing, guiding their design. It usually ended up written down in a test plan under a heading of 'approach', and it was one of the most difficult things to write in a way that was specific to what went on in that particular project.

With agile, iterations and testing turning continuous, figuring out a test strategy did not get easier. But the ideas guiding test design turned into something that was around for longer, and in use for longer. I talked about the ideas that stuck with me at DEWT5 in 2015, and the same ideas guide my testing to this day.


Since then, I have been working even more on the strategy we share, and on visualizing it to nudge it forward. The strategy in action in a new team can be dug out of the team by asking them to visualize their testing activities. The strategy I set does not matter if it does not turn into action with the team. We now move versatile groups of people across different roles and interests.

This week gave me a chance to revisit my ways on the theme of test automation strategy. I have never written one. I have read many, and I would not write any of those. But it stopped me to think about the ideas that guide my test automation design right now. These are the ideas that I brainstormed:
  • Start with the end in mind
    • At release time, minimal eyes on the system. Rely on TA (test automation) for the release decision. 
    • TA keeps track of what we know so that it remains known when we change things
  • Incremental, incomplete, learning
    • Work towards a flow of TA value - small streams become a significant pool over time. Continuously moving towards better is what matters, not starting well or perfectly.
    • Something imperfect but executable is better than great ideas and aspirations. Refactor to reveal patterns.
  • Timing
    • Feedback nightly, feedback on each change. 
    • Maintain ability to run TA on every version supported for customers
  • Early agreement
    • Design automation visibility and control interfaces at epic kickoffs
  • Scope
    • For each epic (feature), add the positive case to TA. Target one. More is allowed but don't overstretch.
    • Unit and software integration tests cover cruft of functionality. TA is for system level scenarios including hardware (as it is embedded for us). 
    • Not only regression TA, also data, environments, reliability, security and performance in automation. 
    • Acceptance tests for interfacing teams monitor expected dependencies.
    • Save the data. Build on the data. But first learn to run it. 
  • People
    • Invest in skilled TA developers through learning and collaboration
    • Require developers to maintain automation for breaking changes
    • To facilitate GUI selectors, GUI devs create first test with keywords
    • Allow for a "domain testing expert" who only contributes in pull request reviews on TA
  • Practices
    • Suites and tags give two dimensions to select tests; use tags for readiness (a sketch follows this list)
    • Seek to identify reusable data sets and oracles
    • Reuse of keywords supported through reviews and refactoring time
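To make the suites-and-tags idea concrete, here is a minimal sketch of how it could look with pytest markers. The marker names, the placeholder login function, and the selection commands are illustrative assumptions on my part, not something the strategy prescribes - the same idea works with any runner that supports tagging tests.

```python
# Illustrative sketch only: "tags" as pytest markers, "suites" as test directories.
# The marker names ("ready", "nightly") and the commands below are assumptions.
import pytest


def _login(user: str, password: str) -> str:
    # Placeholder standing in for the real system under test.
    return "welcome" if password else "denied"


@pytest.mark.ready
def test_login_positive_case():
    # The one positive case added to TA when an epic (feature) closes.
    assert _login("maria", "secret") == "welcome"


@pytest.mark.ready
@pytest.mark.nightly
def test_login_repeated_for_reliability():
    # A slower check that belongs in the nightly feedback loop, not on every change.
    assert all(_login(f"user{i}", "secret") == "welcome" for i in range(1000))


# Two dimensions of selection (markers would be registered in pytest.ini):
#   on every change:  pytest tests/smoke -m "ready and not nightly"
#   nightly:          pytest tests -m "ready"
```

The point of the sketch is that tags stay orthogonal to the suite structure, so readiness and timing can change without moving tests around.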
I guess this is as close to a test automation strategy as I'm going to get. 


Thursday, February 11, 2021

Requirements Traceability

I'm all for understanding the products we build - what they do, and why we think doing those things might be relevant. The language I choose for requirements is the language of discovery. Instead of us requiring something, we discover needs and constraints, and make agreements we visualize in structures and writings to create a common understanding among all of us in our organizations and our customers. Instead of setting a requirement, I like to think of us setting a hypothesis of something being useful, and testing that. And since I like to approach it as uncertain, I can easily embrace the idea that we learn and it changes. I don't need the concept of managing change to requirements; it is already built in with incremental delivery.

Thus, there are very few things that annoy me to the brink of wanting to change jobs, but this is one of them: requirements traceability between the magical requirement and the test case.

In organizations that belong to the school of requirements, this is the heartbeat that conflicts with my heartbeat, which is set on releases. Releases include the capabilities present at that time, an understanding of what capabilities we had, and the idea of systematically replacing those capabilities with a newer set. Arrhythmia is an irregularity of the heartbeat, and it serves as a description of how I feel in the conflict of these two worlds, which I need to learn to fit together without giving up on what I have grown to see as important.

In search of congruency, I put significant effort into doing things I consider valuable. I have never considered it valuable to take a requirement written down in a tool, create a test case written down in a tool, and link the two together. I don't write test cases - I test directly against the requirements, and creating this other document feels like wasted time. Also, I don't believe that the requirements we create are the best representation of the understanding that leads us to the information I seek to find in testing, and often following someone else's structure takes away from my ability to contribute information that adjusts that structure for the benefit of all stakeholders.

So, requirements traceability not only wastes my time in creating material I consider useless, but it also makes it harder to create the results I expect to create. Over my career, I have needed to set many organizations straight onto a good path of testing that provides results - organizations that started off with a requirements-centric straitjacket, creating testers I would recommend letting go.

So I push through once more with what I will do:

  • Given a requirement, I will confirm it in an exploratory testing session, but only document that when closing the epic, at the point in time we first introduce it into a release in the making
  • I will work to include it in test automation, and keep a version of test automation around that matches that release while it is supported. I will not offer a link back to requirements from specific automation cases. 
  • When using a feature is hard to figure out, I will write feature-specific instructions to document what I have learned while testing
  • I will create whatever material supports continuous, growing testing, without linking it to requirements.
  • I will care not only for my own work now, but for the work that comes after I am long gone
I recognize we are seeking these benefits from the mechanism the industry standards push:

  • managing expectations on what the product does and does not do
  • enabling support when the product is in maintenance by
    • distinguishing defects (as being against the specification) from adaptation requests
    • retesting connected requirements when changes are needed
  • ensuring we don't miss someone's perspective in delivery by making 'proof' of testing on all perspectives - connecting the two ends of a process allows for being systematic
  • making testing accountable for monitoring delivery scope on an empirical level
  • having an up-to-date description of the configuration of each delivered version
  • making it easier to replace the old product with the new
  • tracking the project from requirements to deliverables, giving a sense of control to the project manager