Last Friday, I watched a group of software craftsmen agree on 3 * 20 minutes of paired demonstration on the refactoring kata "Gilded Rose", and then change their minds after the first 20 minutes.
The first 20 minutes were a pretty awesome demonstration of Llewellyn Falco and Aki Salmi pairing in strong style, using ApprovalTests in Java. The first 15 minutes went into a cycle of adding tests with LegacyApprovals (which I knew from C# as CombinationApprovals), adding input values to a single line of test code based on what the Emma code coverage tool hinted might still be missing. For every expected result, they simply documented the current behavior with ApprovalTests, rather than trying in any way to understand or describe it themselves.
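To make the mechanics concrete, here is a minimal sketch of that combination-lockdown style against the Gilded Rose kata. I'm using the CombinationApprovals entry point of ApprovalTests.Java rather than the LegacyApprovals one from the demo, the GildedRose and Item classes come from the kata itself, and the input values are illustrative picks of mine, not Friday's; exact signatures may vary between library versions.

```java
import org.approvaltests.combinations.CombinationApprovals;
import org.junit.Test;

public class GildedRoseLockdownTest {

    @Test
    public void updateQuality_allCombinations() {
        // Every combination of these inputs becomes one recorded case;
        // the expected results live in an .approved.txt file that simply
        // captures whatever the code does today.
        CombinationApprovals.verifyAllCombinations(
                this::doUpdateQuality,
                new String[] {"foo", "Aged Brie",
                        "Sulfuras, Hand of Ragnaros",
                        "Backstage passes to a TAFKAL80ETC concert"},
                new Integer[] {-1, 0, 5, 6, 10, 11},
                new Integer[] {0, 1, 49, 50});
    }

    // Runs the code under test for one input triple and renders the state.
    private String doUpdateQuality(String name, int sellIn, int quality) {
        GildedRose app = new GildedRose(new Item[] {new Item(name, sellIn, quality)});
        app.updateQuality();
        return app.items[0].toString();
    }
}
```

Widening those arrays with more names and more sellIn/quality boundary values is how a single line of test code balloons into the 1350 recorded cases below, and Emma tells you which branches your current values still leave unexercised.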
The last 5 minutes they spent cleaning up the code, now protected by 100 % unit test coverage.
The 5 minutes after their time-box the group used for extending into mutation testing, adding some more tests as the PIT tool suggested some of the existing ones were weak.
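The kind of weakness PIT flags is easy to picture with a made-up pair of tests (my example, not from the demo; the assertion assumes the kata's Item renders as "name, sellIn, quality"). The first test executes the code but asserts so little that mutants survive; the second pins the observable state and kills them.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class MutationStrengthTest {

    @Test
    public void weak_executesButBarelyAsserts() {
        GildedRose app = new GildedRose(new Item[] {new Item("foo", 0, 0)});
        app.updateQuality();
        // Full line coverage, yet PIT mutants that flip the arithmetic
        // or the conditional boundaries all survive this assertion.
        assertNotNull(app.items[0]);
    }

    @Test
    public void strong_pinsObservableState() {
        GildedRose app = new GildedRose(new Item[] {new Item("foo", 0, 0)});
        app.updateQuality();
        // Pinning the rendered state makes those same mutants fail.
        assertEquals("foo, -1, 0", app.items[0].toString());
    }
}
```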
Total: 1350 tests from a single line of test code, with the expected results defined as "if it works in production now, let's just keep it that way".
On Saturday, I took part in a code retreat and used ApprovalTests in some of my sessions. This left me thinking about why I'm particularly fascinated with ApprovalTests:
- The tests living in files, with explanatory padding, make sense in the world I think in.
- The "recognition" part is what I feel I have special skills on anyway as an exploratory tester
- The idea of filtering and processing output depending on what technology you're testing, to keep the focus on testing, makes sense to me.
- There are practical solutions to things I've sometimes thought of as too hard to test, like running combinations quickly or keeping tests that work against an external service fast (the iExecutableQueries stuff, where you do the slow work only on failure; see the sketch after this list).
- The idea of doing special things on failure, for granularity, makes sense, and changing reporters while investigating reminds me again of exploratory testing.
- I like how this feels so much like exploratory testing on unit level.
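To illustrate the "do the slow stuff only on failure" point from the list above, here is a hand-rolled sketch of the pattern rather than the library's actual iExecutableQueries API, whose exact Java signatures I haven't verified: approve the cheap artifact (the query text), and hit the slow external service only when that check fails, to gather diagnostics.

```java
import org.approvaltests.Approvals;

public class SlowOnFailureVerifier {

    // The cheap artifact is the query we intend to send; executing it
    // against the real service is the expensive part we want to avoid.
    interface SlowQuery {
        String getQueryText();          // fast: just builds a string
        String executeOnRealService();  // slow: network, database, etc.
    }

    // The happy path verifies only the query text against the approved
    // file; the slow call runs solely when the approval fails, to show
    // what the changed query would actually return now.
    static void verify(SlowQuery query) {
        try {
            Approvals.verify(query.getQueryText());
        } catch (Throwable approvalFailure) {
            System.err.println("Query changed; live result is now: "
                    + query.executeOnRealService());
            throw new AssertionError("Approval failed; see live result above",
                    approvalFailure);
        }
    }
}
```

The reporter point works the same way in ApprovalTests.Java: annotating a test with @UseReporter(DiffReporter.class) decides what happens on failure, and swapping reporters while investigating is itself a small exploratory-testing move.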
Better do some more exploratory testing on the tool. Next up is understanding how well the claims of what the different Approvers do actually hold against the implementation. And then I was thinking of finding ways of breaking it in its environment of use.
If you want to pair on this, ping me. Just some educational fun on someone's open source project.