Friday, October 29, 2021

Lessons Learned on Working with Data-Intensive Applications

In my years of teaching test design techniques, I have come to teach people that there are (at least) two essentially different types of applications we design tests for:

  • function-intensive applications are ones where you list the tricks the app can do, and a lot of the work in designing tests is creating lists of functionalities and exploring when they work.
  • data-intensive applications are ones where the same functionality is riddled with data-oriented rules, and you are collecting the business rules captured in data.

This difference became clear to me as I switched jobs a long time ago from antivirus software development (function-intensive) to pension insurance (data-intensive), and spent the next few years trying to wrap my head around the new challenges I had not paid attention to before.

When data became the center of my testing universe, I learned that one of the major challenges, one we would spend a significant chunk of our time on, would be picking the "test data". If we needed a person who was just about to turn 63 (the age of early pension), and we wanted to test today the scenario that they were under the limit and tomorrow the scenario that they were over the limit, we needed to find a precise set of data with those conditions.
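To make the boundary concrete, here is a minimal sketch of the date arithmetic involved. It is in Python, the early-pension age of 63 is taken from the example above, and the function name and the rest are mine for illustration, not anything from our actual tooling.

    from datetime import date, timedelta

    EARLY_PENSION_AGE = 63  # the early-pension age limit from the example above

    def birth_date_at_limit_tomorrow(today: date) -> date:
        """Birth date of a person whose 63rd birthday is tomorrow:
        under the limit today, over the limit tomorrow."""
        tomorrow = today + timedelta(days=1)
        try:
            return tomorrow.replace(year=tomorrow.year - EARLY_PENSION_AGE)
        except ValueError:
            # tomorrow is Feb 29 and the target year is not a leap year
            return tomorrow.replace(year=tomorrow.year - EARLY_PENSION_AGE, day=28)

    # For example, on 2021-10-29 this points at test persons born 1958-10-30.
    print(birth_date_at_limit_tomorrow(date(2021, 10, 29)))

The hard part, of course, was never the arithmetic but finding a real, connected set of data matching that birth date across the systems.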

And the data was not a simple, straightforward row in a database. It was a connected set of databases, some owned by our company, some by other companies, and to get a production-like experience in the test environment, we had tools to choose someone and pull all their related information into our systems. Similarly, we knew we had an agreement that with 14 days' notice, the other players in the pension insurance field would set their data on request to match what their production had. When I knew who pulls a single data set and scrambles the data in the process, and who pulls entire copies of their production databases and scrambles the piece we use to match the way we scramble, I could make the testing done by our business experts flow a lot more smoothly.

Ten years passed, and I forgot it was difficult because for me it was routine. Until today, when I had to explain at my current place of work why something so easy and obvious to me is so difficult and complex to many others.

This is what we had today. An application hooked into its own database. Separate production and test environments. But business application data connected between the production and test environments.


Some applications make it prohibitively hard to use artificial data in test environments. When your business systems run tens or hundreds of hourly sync batch jobs, have tens or hundreds of users executing their day-to-day manual processing tasks in the user interfaces, and have a system clock that inevitably changes things because you have time-based logic, you will need to replenish the data from production.

What we had were two different, very simple replenish cycles. Once a month, on an agreed date, one of the systems would get a refreshed copy from production. Once a year, another of the systems would get a refreshed copy from production.

Last year I had designed our test data, independent of production in the other parts of the end-to-end test environment, to be fine when data moves like this. The two synchronizing systems had the necessary data in production, and it would be reintroduced when the data was replenished.

Except it did not work. 

The application had bugs around not expecting the data to be replenished, but only in the logic that changes once a year.

Someone else had not understood the rule of how to set up the data and had requested data that vanished in the replenish. 

I spent significant time teaching how to follow the data across the systems, and how the logic works between connected data sources and different environments. Had I not known this, I could not have tested the (data-intensive) functionalities last year when I did.

What I learned, though, is that:
  • documenting and knowledge sharing do not help if there are 10 months between being taught and needing the information
  • what I consider clear may be unclear to others
  • everything that can fail will fail, but at least it failed in the test environment

Sunday, October 24, 2021

Talks Turned Articles

At my blog, I write whatever I feel like writing down. With blog posts, I accept that the posts are windows to my thoughts, representing whatever I had as context in the moment. People read some of it, and some of them connect with me on the topics I am discussing. The temporal nature of blogging means that I don't expect to write here sources that would be useful as articles and references, but different perspectives that I am processing towards those articles and references - and talks.

I am committed to writing proper articles, and I have different collaboration platforms I share those articles on. By an article, I mean something that should be useful even if you don't come to it at the time it was written, something that collates work into a more concise package. Articles should stand on their own, and teach you something that new people need to learn, again and again.

Talks delivered, however, are something in between. The spoken format allows for - and requires - content in a style and format we rarely write in. Talks are created to stay valid for a longer time, but the way they are presented is very temporal. A lot of times when I choose talks for conferences, I find myself using the phrase "not a talk, should be an article".

Thus I find it interesting to experiment with something in between. I have started writing up some of my talks after delivering them once. Video is available, but who watches videos as replays when there is a continuous stream of new content? I call this experiment #TalksTurnedArticles and you can find those on my dev.to profile.

The latest in the collection is the talk I delivered today at TestFlix: Better Ideas at Test Design. Before that I published my favorite transformation experience, Practice Makes Better - 5x to Continuous Releases. And the first in this series was Exploring Pipelines.

These are articles in the sense that they are content I believe will stand the test of time. They are in a different place so that this place remains a low bar for writing on my experiences, which turn into summarized talks and articles usually on a cycle of some years.

In addition to #TalksTurnedArticles, I chose dev.to as the place to host my full courses as text. The first one is available already: Exploratory Testing Foundations.

You follow what I write here, but my best writing - at least by my standards - resides elsewhere.

Tuesday, October 12, 2021

Three Stories Leading Into Exploratory Testing

At the end of September, I volunteered for a small lunch-time panel on exploratory testing at the conference. I sat down for a conversation and had no idea it would be such a significant one for my understanding. The panel was titled "To Explore or Follow the Map" and I entered the session with concerns about the framing. After all, I explore with a map and follow the map while exploring.

Dorota, our session facilitator, opened the session by inviting stories like the one she was about to share on first experiences with exploratory testing.

Paraphrasing Dorota from memory, she shared a story of how her first testing experience in the industry was on a military project where the project practice included requirements analysis and writing and executing test cases to do the best possible testing she could. One day the test leader invited all the testers to a half-a-day workshop where they would do something different. The advice was to forget the test cases and explore to find new information. And they did. The experience was eye-opening about all the things the thorough test case writing was making them miss.

I listened to Dorota's account and recognized she was talking about exactly the expectations I am trying to untangle in my current organization. Designing test cases creates lovely requirement-to-test linking but misses all too many of the issues we would expect to find before the software reaches our customers.

Next up was Adam, who shared a story of his first job in testing. His manager / tutor introduced him to the work expected from him by giving him an Excel sheet with test cases, and a column in which to mark the pass/fail results. Paraphrasing his experience from memory, he shared that after he finished the list, the next step was to start over from the beginning. The enlightenment came at a conference where he met an exploratory testing advocate and realized there were alternatives to this.

My story was quite different. When I first started as a tester, I was given test cases, but also a budget of time to do whatever I wanted with the application, whatever I considered would teach me to understand the application and its problems better. The test cases gave some kind of structure for talking about progress in regard to them, and I could also log my hours on whatever I was doing outside the test cases without very rigid boundaries between the activities. The time budget and expectations were set for the testing activity as a whole, and I could expect a regular assessment of my results by the customer organization's more seasoned testers. The mechanism worked so that for a new person, the first "QA of testing" was feedback, and later ones carried a financial penalty if I was missing information they expected me to reasonably find with the mix of freedom and test cases I started with.

While I was given space for better, I did not do better. No one supported me the way I nowadays aspire to support new joiners. Either I knew what I was doing or a minor penalty on invoicing was ahead; I would still be paid for all of my hours. I never knew anything but exploratory testing, and the stories of injecting it into organizations as Friday afternoon sessions or rebellious use of test cases to stretch from have always been a little foreign to me.

What the three stories have in common is that exploratory testing is part of these pivotal moments that make us love the testing work and do well with results. My pivotal moment came in my second job, where I was handed a specification, not test cases, and I had to turn my brain on, and I've been on a path of extraordinary agency and learning since.

Also, these stories illustrate how important the managers / tutors are in setting people up on a good path. Given requirements turned into test cases, you simplify the work and miss the results. Given test cases, you do work better left for computers. Given time without support, you do what you can, but support is what turns your usefulness around.