The team had followed a core practice of clarifying with tests for a while, and they invited an outsider to join their usual meeting routine.
There were eight people on the call. The one who had sent the invite shared his screen for what was their routine test design session. He copied the user story they had been assigned to work on into the Jira ticket he had open, and asked the group for test ideas.
People started suggesting things that could be tried, and the facilitator wrote them down as the rest of the group watched. When an idea was high level, he asked for the detailed steps. Every idea that came up was written down and detailed.
After the meeting, the group would split up the work of automating it all.
And two weeks later, they had all these tests passing, and a feature that just did not work.
The magic glue they are missing is what I call exploratory testing: caring about the results of testing by focusing on learning, and recognising that the time when most people create test cases like the above is the time when they know the least.
You can add exploratory testing on top of this.
You can lighten the level of detail you write up front to leave more room for exploratory testing, where the output of your testing is the documentation.
You can seek to optimize for learning while staying aware of the time used.
The team that followed the planning pattern did not do exploratory testing. You could argue the next team, using their results, did the exploratory testing by trying to use the feature and telling them it was failing.
Working with teams like this is real. It is embarrassing, but it is real. And we don't change it by playing with words, but by making the results we expect clear while leaving room for excellence.
This story, unfortunately, was brought to you by the "tell no stories that did not happen" and "only stories from the last two years allowed" rules. This is real. This is what people still make of testing, and some folks meekly follow the agreed practice.