I've spent a day deepening my personal understanding of end-to-end
scenarios and the reliability of all the test automation we have around
here. I have not come to a conclusion yet, but I have started to
slowly frame my role as an exploratory tester as someone who tests the
reliability of the test automation systems. And given my interest in
clean code and reuse, I also seem to be taking an active role in
testing whether our sharing solutions for test automation make sense,
now or more so in the future.
As I was testing end to end with an exploratory approach, I was bound to find some issues. I'm in an easy situation now in the sense that I have an old version that "works" to compare against, kind of like back when I was doing localization testing. Back then, if the comparison version was broken in the same way, we mostly did not need to flag the problems.
All the issues I found ended up in a mindmap while testing, with a color coding:
- Problems with the new, not yet confirmed with the old
- Problems with the new, confirmed not to be in the old
- Problems with the old, vanished from the new
- Problems with both
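That color coding is really a small classification scheme. A minimal sketch of how it could be encoded, assuming a hypothetical `Issue` record with flags for whether a problem reproduces on each version (all names here are mine, not from any real tool):

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    NEW_UNCONFIRMED = "problem in new, not yet checked against old"
    NEW_ONLY = "problem in new, confirmed absent in old"
    FIXED_IN_NEW = "problem in old, vanished from new"
    BOTH = "problem in both"


@dataclass
class Issue:
    title: str
    seen_in_new: bool
    checked_on_old: bool = False
    seen_in_old: bool = False

    def category(self) -> Category:
        # Order matters: an issue stays unconfirmed until someone
        # reproduces (or fails to reproduce) it on the old version.
        if self.seen_in_new and not self.checked_on_old:
            return Category.NEW_UNCONFIRMED
        if self.seen_in_new and self.seen_in_old:
            return Category.BOTH
        if self.seen_in_new:
            return Category.NEW_ONLY
        return Category.FIXED_IN_NEW
```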
Once the data was collected and I was pretty convinced I knew enough for now, I stopped for a moment. Normally, this would be the moment when I, at the latest, go to Jira and log some bugs. I had to fight the urge to do that.
I fought the urge because I want to keep trying the fix-and-forget approach. Instead of taking these to Jira and moving on, I want to:
- Find the test automation that isn't catching these, and pair up to get them caught (see the sketch after this list)
- Find the developer contributing to these, to understand their work priorities and when my feedback on these (and other issues) would be most timely, so that we are not randomly jumping around the product but completing a feature / theme at a time
- If these are known issues, figure out a way to get, and keep, the main branch more release-ready
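For the first item, the gap between what I found and what the automation catches can often be closed with a comparison check: run the same end-to-end step against both versions and fail on divergence, with the old version acting as the oracle. A minimal pytest sketch, where `run_scenario`, the environment URLs, and the scenario names are all made up to stand in for whatever the real automation uses:

```python
import pytest

OLD_BASE_URL = "https://old.example.test"  # hypothetical environments
NEW_BASE_URL = "https://new.example.test"

SCENARIOS = ["login", "create_order", "export_report"]  # made-up scenario names


def run_scenario(base_url: str, scenario: str) -> str:
    """Placeholder: drive the scenario end to end and return an observable result."""
    raise NotImplementedError("wire this into the actual test automation")


@pytest.mark.parametrize("scenario", SCENARIOS)
def test_new_matches_old(scenario: str) -> None:
    # The old version is the oracle: any divergence deserves a look,
    # even if it turns out to be an intentional change.
    expected = run_scenario(OLD_BASE_URL, scenario)
    actual = run_scenario(NEW_BASE_URL, scenario)
    assert actual == expected, f"{scenario!r} diverges from the old version"
```

A check like this surfaces any of the first three color codes; what it cannot see is "problems with both", which is exactly where exploratory testing still earns its keep.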