Saturday, May 28, 2022

Sample More

Testing is a sampling problem. And sampling is where we make our most significant mistakes.

The mistake of sampling only on the developer's computer leads to the infamous phrases "works on my computer" and "we're not shipping your computer". 

The mistake of sampling just once leads to the experience where we realise it was working when we looked at it, even if it is clear it does not work when someone else is looking at it. We then go back to our sampling notes of exactly what combination we had, to understand whether the problem was in the batch we were sampling, or whether one sample simply does not make a good test approach.

This week I was sampling. I had a new report intended for flight preparations, including a snapshot of weather conditions at a point in time. If a computer can do it once, it should be able to repeat it. But I had other plans for testing it.

I wrote a small script that logged in and captured the report every 10 seconds, targeting 10,000 versions of it. Part of my motivation for doing this was that I did not feel like looking at the user interface. But a bigger part was that I did not have the focus time: I was otherwise engaged, pairing with a trainee on the first test automation project she was assigned to. 
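
The script itself was nothing fancy. A minimal sketch of its shape, in Python, with the base URL, the login call and the report endpoint as made-up placeholders rather than the real system, would look something like this:

    import pathlib
    import time
    import requests

    BASE_URL = "https://system-under-test.example"   # placeholder, not the real address
    SAMPLES = 10_000                                  # target number of report versions
    INTERVAL_SECONDS = 10

    session = requests.Session()
    # Hypothetical login call; substitute whatever authentication the real system uses.
    session.post(f"{BASE_URL}/login", json={"username": "tester", "password": "secret"})

    out_dir = pathlib.Path("report-samples")
    out_dir.mkdir(exist_ok=True)

    for i in range(SAMPLES):
        response = session.get(f"{BASE_URL}/reports/flight-preparation")
        # Keep every version with a timestamp so patterns (and gaps) show up in review.
        stamp = time.strftime("%Y%m%d-%H%M%S")
        (out_dir / f"report-{i:05d}-{stamp}.json").write_text(response.text)
        time.sleep(INTERVAL_SECONDS)

A directory of timestamped files like this is what I then reviewed to see the patterns of dead time and unchanged reports.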

It is easy to say in hindsight that turning up the sample size was a worthwhile act of exploratory testing. 

I learned that regular sampling on the user interface acts as a keep-alive mechanism for tokens, which then don't expire like I expect them to.

I learned that while I expect a new report every minute, the number of 10-second samples I could fit within each report varies a lot, and I could explore that timing issue some more.

I learned that given enough opportunities to show change, when change does not happen, something is broken, and with a smaller sample size I could just be unlucky and not notice it. 

I learned that sampling allows me to point out times and patterns of our system dying while doing its job. 

I learned that dead systems produce incorrect reports while I expect them to produce no reports. 

A single test - sampling many times - provided me more value than I had anticipated. It allowed testing to happen unattended until I had time to attend again. It was not automated: I reviewed the logs for the results, tweaked my scripts for the next day to see different patterns, and can now make better choices about the values I would like to leave behind for regression concerns. 

This is exploratory testing. Not manual. Not automated. Both. Be smart about the information you are looking for, now and later. Learning matters. 

Friday, May 6, 2022

Salesforce Testing - Components and APIs to Solutions of CRM

In a project I was working on, we used Salesforce as the source of our login data. So I got the hang of the basics and access to both test (we called it QUAT - Quality User Acceptance Testing environment) and production. I learned that QUAT got its data from yet another system that had a test environment too (we called that one UAT - User Acceptance Testing), with a batch job run every hour, and that the two environments had different data replenishment policies. 

In addition to realising that I had become one of the very few people who understood how to get test data in place across the three systems so that you could experience what users really experience, I learned to proactively design test data that wouldn't vanish every six months, and to talk to people across two parts of the organization that could not be any more different.

Salesforce, and business support systems like it, are not systems that product development (R&D) teams maintain. They are IT systems. And even within the same company, those are essentially different frames for how testing ends up being organised. 

Stereotypically, the product development teams just want to use the services and thus treat them as a black box - yet our users have no idea which of the systems in the chain causes trouble. The difference, and the reluctance to own the experience across two such different things, is a risk when it comes to clearing up the problems that will eventually happen. 

For the Salesforce component acceptance testing that my team ended up being responsible for, we had very few tests in both the test and production environments, and a rule that if those fail, we know to discuss it with the other team. 

For the Salesforce feature acceptance testing that the other team ended up being responsible for, they tested, with a checklist, the basic flows they had promised to support with every release, and dreamed of automation. 

On a couple of occasions, I picked up the business acceptance testing person and paired with her on some automation. Within a few hours, she learned to create basic UI test cases, but since she did not run and maintain those continuously, the newly acquired skills grew into awareness rather than a change in what she fit into her days. The core business acceptance testing person is probably the most overworked person I have gotten to know, and anything most people would ask of her would go through strict prioritisation with her manager. I got a direct route through our mutually beneficial working relationship. 

Later, I worked together with the manager and the business acceptance testing person to create a job for someone specialising in test automation there. And when the test automation person was hired, I helped her and her managers make choices on the tooling, while remembering that it was their work and their choices, and their possible mistakes to live with. 

This paints a picture of a loosely coupled "team" with sparse resources in the company, and change work being done by external contractors. Business acceptance testing isn't testing in the same teams where the devs work, but it is work supported by domain specialists with deep business understanding and now, also, a single test automation person. 

They chose a test automation tool that I don't agree with, but then again, I am not the one using that tool. So today, I was again thinking back to the choice of this tool, and how testing in that area could be organized. In response to a probing tweet, I was linked to an article on the Salesforce Developers Blog on UI test automation on Salesforce. What that article basically says is that they intentionally hide identifiers and use shadow DOM, and you'll need people and tools that deal with that. Their recommendation is not about the tools, but about the options of who to pay: tool vendor / integrator / internal.

I started drafting the way I understand the world of options here. 


For any functionality that integrates through APIs, the OSS Setup 1 (Open Source Setup 1) is possible. These are REST APIs, and the team doing the integration (the integrator) probably finds value for their own work if we ask them to spend time on this. It is really tempting for the test automation person on the business acceptance testing side to do this too, but it risks delayed feedback, and it is in any case an approximation that does not help the business acceptance testing person make sense of the business flows within their busy schedule and work that focuses on whole business processes. 
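
To make the idea concrete, an API-level check in OSS Setup 1 could be little more than a SOQL query over the standard Salesforce REST query endpoint. This is a rough sketch under assumptions: the instance URL, the token handling and the contact being checked are placeholders, not anything from the real setup.

    import requests

    INSTANCE_URL = "https://yourinstance.my.salesforce.com"   # placeholder org
    ACCESS_TOKEN = "access-token-from-oauth-flow"             # never hardcoded for real

    def soql_query(soql: str) -> dict:
        """Run a SOQL query against the standard Salesforce REST query endpoint."""
        response = requests.get(
            f"{INSTANCE_URL}/services/data/v54.0/query",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            params={"q": soql},
        )
        response.raise_for_status()
        return response.json()

    # Example check: the test data our login flows rely on is still in place in QUAT.
    result = soql_query("SELECT Id, Name FROM Contact WHERE Email = 'qa.user@example.com'")
    assert result["totalSize"] == 1, "expected exactly one QA contact in the test environment"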

The article mentions two open source GUI tools, and I personally used (and taught the business acceptance testing person to use) a third one, namely Playwright. I colour-coded a conceptual difference between getting one more box from the tool and having to build it yourself, but the skills profile you need to create the helper utilities yourself is probably not that different from the one you need to use someone else's helper utilities, provided the open source tool community has plenty of open online material and examples. Locators are where the pain resides, as the platform itself isn't really making it easy: maintenance can be expected, and choosing locators that work can be hard, sometimes prohibitively so. This is also full-on test automation programming work, and an added challenge is that Salesforce automation work inside your company may be lonely work, and it may not be considered technically interesting by capable people. You can expect test automation people to spend limited time on the area before longing for the next challenge, and building for sustainability needs attention. 
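
For the flavour of it, this is roughly the locator style I taught with Playwright, leaning on roles and visible labels since Playwright's locators pierce open shadow DOM. The org URL, the labels and the navigation steps here are made-up placeholders, not a real Salesforce setup:

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://yourinstance.lightning.force.com")  # placeholder org

        # Role- and label-based locators tend to survive Lightning's generated
        # identifiers better than brittle CSS paths.
        page.get_by_label("Username").fill("qa.user@example.com")
        page.get_by_label("Password").fill("not-a-real-password")
        page.get_by_role("button", name="Log In").click()

        page.get_by_role("link", name="Contacts").click()
        page.get_by_role("heading", name="Contacts").wait_for()

        browser.close()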

The commercial tool setup comes into play by outsourcing the locator problem to a specialist team that serves many customers at the same time - adding to the interest and making it a team's job rather than an individual's job. If only the commercial tool vendors did a little less misleading marketing, some of them might have me on their side. The "no code, anyone can do it" isn't really the core here. It's someone attending to the changes and providing a service. On the flip side, what you get with this is a fully bespoke API for driving the UI, and a closed community helping to figure that API out. The weeks and weeks of courses on how to use a vendor's "AI approach" create a specialty capability profile that I generally don't vouch for. For the tester, it may be great to specialise in "Salesforce Test Tool No 1" for a while, but it also creates lock-in. The longer you stay in it, the harder it may be to get to do other things too. 

Summing up, this is how I would make my choices in this space: 

  1. Open source drivers with a high community adoption rate to drive the UI, as a capability we grow. Ensure the people we hire learn skills that benefit their career growth, not just what needs testing now.
  2. Teach your integrator. Don't hire your own test automation person if one is enough. Or if you hire one of your own, make them work in teams with the integrator to move feedback down the chain.
  3. Pay attention to bugs you find, and let past bugs drive your automation focus. 

Thursday, May 5, 2022

The Artefact of Exploratory Testing

Sometimes people say that all testing is exploratory testing. This puzzles me, because I have been through, again and again, a frame of testing in organisations that is very far from exploratory testing. It's all about test cases, manual or automated, prepared in advance, maintained along the way, and left for posterity with hopes of reuse. 

Our industry just loves thinking in terms of artefacts - something we produce and leave behind - over focusing on the performance, the right work now for the purposes of now and the future. For that purpose, I find myself discussing, more often, the artefact of an answer key to all bugs. I would hope we all want one, but if we had one in advance, we would just tick the bugs off into fixes and no testing would be needed. One does not exist, but we can build one, and we do that by exploratory testing. By the time we are done, our answer key to all bugs is as ready as it will get. 

Keep in mind though that done is not when we release. Exploratory testing tasks in particular come with a tail: following through, over various timeframes, on what the results ended up being, keeping an attentive ear directed towards the user base, and doing deep dives into the production logs to note patterns changing in ways that should add to that answer key to all the bugs. 

We can't do this work manually. We do it as a combination of attended and unattended testing work, where creating the capabilities for unattended testing requires us to attend to those capabilities, in addition to the systems we are building. 


As I was writing about all this in a post on LinkedIn, someone commented in a thoughtful way that I found a lot of value in. He told of incredible results and relevant change over the last year. The very same results through relevant change are what I have been experiencing, I would like to think.

With the assignment of "go find (some of) what others have missed", we go and provide the results that make up the answer key to bugs. Sounds like something I can't wait to do more of!