I could have seen this post coming as I was writing my previous one - a longer explanation of why I would call some automators good testers even when they show no understanding of what good manual (exploratory) testing actually does.
The core of it is that I believe there are two kinds of testing: testing as artifact creation (automation, documentation) and testing as performance (exploration). When you're good at one, you might not be good at both. But both are testing. The first focuses more on known knowns and known unknowns, whereas the latter focuses more on unknown unknowns and unknown knowns.
I believe you can't spend decades automating a process (with good results) without learning to understand it, its problems, and its solutions. While your understanding might not be perfect or complete, it can still be good. And looking at exploratory testers who don't actively go into the realm of automation (pairing with programmers qualifies - friends with pickup trucks is a valid approach), that understanding is not perfect or complete either.
When test automators give the impression of believing in 100% automation, they often compare only things within testing as artifact creation. You learn stuff around creating those artifacts, and you include it. The manual testing they compare against is the manual testing exploratory testers loathe - the test-case-driven, dumbed-down commodity testing that drives down skill.
I've come to think I understand this a little better these days through two friends.
Pekka Klärck is a good friend of mine, and the creator of Robot Framework. We've been members of the same local community for a long time, and in Finland we're lucky to get along even when we disagree heavily. Where I've been an exploratory tester for most of my career, he's been a test automator for most of his. We can easily get into arguments around manual versus automated, and respectfully agree to disagree. But through all of that, he has taught me to respect test automators as a different specialty, no less testers than the likes of me (exploratory testers).
Llewellyn Falco is another friend, and the creator of ApprovalTests. Working side by side with him, I've come to realize that there are generalist developers who are really good at testing, and who become better when exposed to good exploratory testing. I've also learned that the testers he's met over the years are nothing like what I perceive a skilled tester to be.
Hanging out with people different from you can be exhausting and frustrating. They might walk over you or fail to hear you, and you will need to try again.
So we have four types of testers (at least):
- commodity testers
- exploratory testers
- test automators
- programmers becoming good at testing
- (test managers / architects - working in scale)
When we argue about what a tester must know and learn next, we should look at the balance. In organizations with loads of programmers becoming good at testing, you could use an exploratory tester - the ratio is more like 1:10 or 1:100 than 1:1. In organizations where programmers are bad at testing, even commodity testers provide value, but a mix of test automators and, again, a bit of the exploratory tester skillset could be better.
Not everyone can hire the guru. Most organizations need to have homegrown gurus. The good homegrown gurus look around and learn a little more each day - regardless of what corner of testing / programming they end up starting with.
I repeat this a lot, but it's worth mentioning again. If this industry doubles in size every five years, as Uncle Bob has mentioned, half of the industry has less than five years of experience. Let's spread out the learning so that all relevant corners end up covered. More time means more learning, and there's no reason for any of us to stay in an assigned box; instead we should move as our interests guide us.