I was testing around, with a particular focus on a change we had just recently included. Business as usual: we change things all the time and deliver to production whenever they're ready. I was looking at the main screen of our application when a nagging feeling hit me: something was off. The user interface had been completely redesigned just last week, and I had already gone through the changes related to that; things were ok as far as I could tell. Even before I could say what was wrong, I had a gut feeling that something was. That gut feeling made me stop, made me take a thorough look at what should be there, and notice that a feature had gone missing.
To support my memory, I create checklists, not test cases. The checklists enable me to create a map or hierarchy of things and their connections, and to recall vast amounts of information. It's like building a product-specific heuristic that helps me recognize patterns. People are pattern recognition machines, and this plays well with the implicit knowledge I've acquired as a tester on this application over the last three years.
This experience reminded me of something we did over dinner at Agile Open North West in the USA a few months back. My dinner party was trying to list the 50 US states, and with some brilliant minds around the table, there was confidence that they could do it. And yet both who tried ended up getting stuck without a complete list, until they collaborated to trigger each other's memory.
This led me to two realizations about how I test:
- Great memory is supported by the right toolset, and a mind-numbing list of test cases isn't that toolset in my mind.
- Great memory fails us every now and then, and when it's critical that it doesn't, the "executable specification" is a lovely, lovely idea.