Five years ago we hired a trainee who grew into the kind of tester we need. A polyglot programmer, shifting fluently between Python, PHP and TypeScript. Valuing automation and the ability to do great exploratory testing equally. Collaborative, making an impact far wider than her individual contributor work. She is a career changer, and I suspect I will always admire her drive to learn and the combination that emerged from her past experiences and the new things the tech industry made available through work. We worked together for about a year, and I have closely followed her growth since.
Yesterday, quoting Grey's Anatomy, she told me: 'See one. Do one. Teach one.' As corny as a life lesson from Grey's Anatomy is, it's a great one-liner for the path we shared and the expectation I work with, and that she works with.
It's not enough that you do. You need to see: pair and ensemble testing are essential. Seeing is essential. But so is teaching. Reinforcing what you learned by reflecting. Hearing your experience through others' learning.
So today, I refer to her as my prototype for the Reimagined Tester. We have talked about the ideas of contemporary exploratory testing, and how a tester's skill set isn't either great skill at targeted feedback (testing) or at maintaining what we know (test automation), but a great intermix of the two.
There are more people like us, the Reimagined Testers, the Contemporary Exploratory Testers. But we are a minority in the field of testing. And the more I ask people to test applications and watch them test while pairing, the more I recognize we need a major revamp, one that goes beyond the lip service of claiming that what we should do is what we are actually doing in projects.
For the last five years, I have invested a significant chunk of my summers, and essentially my free time, in growing Reimagined Testers and figuring out how I could scale that. Because one a year is not enough.
Choices of Growing 2025
Looking back at how this year's choices emerged, I see three stages:
- Selection with a homework assignment
- Model an application and write test automation
- Find what others have missed
- 'Look, we only got 2 test cases out of three months of effort.'
- 'Look, we got so much learning, and also 2 test cases out of three months of effort.'
- Make space for 'see one'. I handpicked courses that were good, but courses don't give you feedback when you miss the more subtle teachings. We ended up with more trial and error and fewer results, because delayed feedback is not the best platform for learning good foundational practices.
- If using courses to teach, structure the schedule so that the course gets completed. Sampling courses to kick-start progress works great when you have a solid foundation, but not when you need to build one.
- Choose a better tool. Robot Framework was not a good choice. We would have gotten so much more if we had used Playwright with TypeScript. The limited examples online. The hallucinations from GitHub Copilot. The technical limitations cutting off some of the best parts of what Playwright is. These were my choices, and they were wrong. (For the kind of test I mean, see the sketch after this list.)
- A real team with full-time people on the same work would be better. But it is not always possible. We don't really have test automation teams, or test teams.
- Introducing a trackable to-do list for feedback on improvements and corrections. It helped make progress and gave a sense of how the work grew as it was being done.
- Check-ins on progress. Not ones on the calendar, but making space to collaboratively look at what was there and where it was heading.
- Introducing other helpers, even if some of the help came in the form of self-discovered yet discouraged patterns. Making it so that the variability of my availability was not a blocker.
- Fixing the codebase and discussing the fixes. While that introduced merge conflicts, we need to learn to handle merge conflicts early on. And we did.
- Enforcing 'teach one'. Internal demos. Teaching twice to the internal community. Writing a commit analysis with the help of AI to reflect on the outcomes and sharing it with everyone. Essentially, becoming a speaker while a trainee.
- Teach with exercises. I have them, plenty of them. And we would have done better if I had taught. Maybe. That is, I taught some bits on a need basis while coaching on choices of focus, tasks and priorities. But teaching with exercises would most likely have been helpful. Because here it was even more evident: the course material I wrote and assigned for study was never read beyond its start.
- Teach meta. Like the fact that I am at the same time manager, consultant and coach, and have conflicting ideas across my roles. Clarifying and repeating agreements is an essential skill to teach, and I learned this by failing at communication. It's always two people not getting each other.
- Radical candor. Some of the feedback I had to give was corrective in nature, and it helped that we had established that I tell the things I see to help them grow. I did not enjoy giving some of the feedback, but doing it made the growth happen.
- Tester-to-tester coaching. I spent two weeks myself testing the same system to make a consultant recommendation on the team's future actions. I learned their test automation and created some of my own. I can come across as knowledgeable now in the business domain and project status. And I have spent hours hands-on with the system. My guidance was not high level, but the steps I had taken and would take next if I had time.
- The note-taking emphasis. Being able to describe daily insights. Improving the discussion of results within coaching. While we agreed on keeping the notes public, they turned private as soon as I stepped out, but they existed. And they were fodder for genAI in generating test ideas.
- The automation insistence. Automation almost got dropped even though it was essential for completing the mission: find what others miss. Without the insistence, a severely limited ability to test through GUIs would have won out, and that would not have been right.
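To make the tool regret concrete, here is a minimal sketch of the Playwright with TypeScript style I refer to above: user-facing locators and web-first assertions that auto-wait instead of relying on hand-rolled sleeps. The URL, roles, labels and texts are hypothetical illustrations, not from our project.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical example page and texts, for illustration only.
test('added item shows up in the list', async ({ page }) => {
  await page.goto('https://example.com/items');

  // Role- and label-based locators read like the user's view of the page.
  await page.getByRole('button', { name: 'Add item' }).click();
  await page.getByLabel('Name').fill('Exploratory note');
  await page.getByRole('button', { name: 'Save' }).click();

  // Web-first assertion: retries until the item appears or the test
  // times out, so no explicit waits are needed.
  await expect(
    page.getByRole('listitem').filter({ hasText: 'Exploratory note' })
  ).toBeVisible();
});
```

The specific selectors matter less than the shape: the tool's defaults nudge tests toward this readable, auto-waiting style, and that is part of what we missed out on.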