
Friday, September 2, 2016

The Integration ApprovalTests

On my last day at Granlund, I want to share one of the proud moments my team and I have had with our test automation efforts.

Unit testing was never easy for us. With all the people at agile conferences talking about unit testing as if it were the most natural thing any developer could do, I figured that either we're worse than everyone else (I doubt it) or the conference speaker circuit isn't representative. Knowing that many (even most) organizations struggle with unit testing helped.

We did try. I helped developers clear time in their schedules. I found us teachers to run workshops with us. And the project manager would follow some of the numbers very proudly, even reporting them as part of his monthly steering group reports (which never really made sense to me).

I asked about the qualitative side: how did the developers feel creating and using the tests, and were there examples from the past week where a unit test had helped them? While there were some, it wasn't a very good experience. We addressed the feelings of not knowing what is worth testing, of not being able to test things where the architecture wasn't built with tests in mind, and many more. Over the course of two years, we still kept hitting the same wall: perceived lack of value.

Seeing some agile developers test (TDD), rely on their tests and love them, it was clear that we were missing something. But TDD did not turn out to be something that could be easily introduced, regardless of my firm belief in its value.

So we first added database checks and created a lot of value through data monitoring; I've written about this before. With an architectural change to our printouts (the main thing our end users want out of the system we're producing), there was a chance to drive for an API that would enable automated testing of some of the things that are more complicated to test manually.

We separated pushing the data into an Excel file with its formatting functionality into a layer of its own, and all the functionality related to getting the right data was separated as well. While conceptually the unit testing of all this was still too hard, the idea of testing against the API wasn't. And we used ApprovalTests to make it even easier.

We created each scenario to test manually in the database using the application, and added an ApprovalTest for each scenario to keep track of getting the right contents back.

With ApprovalTests, the contents got automatically saved into an .approved file and formed a golden master: alert us if this changes.
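
A minimal sketch of what one of these tests can look like in C# with ApprovalTests and NUnit; PrintoutApi and the scenario id are hypothetical stand-ins for our actual API and data:

    using ApprovalTests;
    using ApprovalTests.Reporters;
    using NUnit.Framework;

    [TestFixture]
    [UseReporter(typeof(DiffReporter))] // open a diff tool when contents change
    public class PrintoutScenarioTests
    {
        [Test]
        public void Scenario01_ReturnsTheRightContents()
        {
            // Hypothetical call into the printout API; the scenario data
            // was created manually into the database beforehand.
            var contents = PrintoutApi.GetReportContents(scenarioId: 1);

            // Writes a .received file and compares it against the .approved
            // golden master; the first run is reviewed and approved by hand.
            Approvals.Verify(contents);
        }
    }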


We created 20 scenarios and started using them as part of our continuous integration. 


Over the course of the year we've had these, they have failed for us for real reasons. They've saved us many times from side effects none of us could foresee.

We also have some unit tests with ApprovalTests, in particular with combination approvals. It might take a moment to understand that your test is the code plus the file, but when you do, this makes a lot of sense. And most importantly: it made practical sense for us in a situation where we were not ready for all the great and fancy ideas related to TDD.
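
To illustrate combination approvals, here is a minimal sketch; DiscountCalculator and its inputs are hypothetical, but the CombinationApprovals call is the real ApprovalTests.Net API:

    using ApprovalTests.Combinations;
    using ApprovalTests.Reporters;
    using NUnit.Framework;

    [TestFixture]
    [UseReporter(typeof(DiffReporter))]
    public class DiscountCombinationTests
    {
        [Test]
        public void VerifyAllDiscountCombinations()
        {
            // Every combination of the inputs is run through the function,
            // and all results end up in one .approved file: the test is
            // this code plus that file.
            CombinationApprovals.VerifyAllCombinations(
                (quantity, customerType) => DiscountCalculator.For(quantity, customerType),
                new[] { 1, 10, 100 },
                new[] { "regular", "gold" });
        }
    }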

Tuesday, August 23, 2016

Circular discussion pattern with ApprovalTests

At Agile 2016 on Monday evening, some people from the Testing track got together for a dinner. Discussions led to ApprovalTests with Llewellyn Falco, and an hour later people were starting to get a grasp of what it is, even though I would think Golden Master is quite a common concept.

Just a few weeks earlier, I was showing ApprovalTests to a local friend and he felt very confused by the whole concept.

Confusion happens a lot. For me it was helpful to understand, over a longer period of time, that:
  • The "right" level of comparison could be Asserts (hand-crafted checks) vs. Approvals (pushing results to a file and recognizing / reviewing them for correctness before approving them as checks); see the sketch after this list.
  • You can make a golden master of just about anything you can represent in a file, not just text. 
  • The custom asserts are packaged clean-up extensions for particular types of objects, making verifying that type of object even more straightforward.
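
A minimal sketch of the contrast in the first bullet, with a hypothetical OrderService; the hand-crafted assert pins one detail at a time, the approval pins the whole result for review:

    using ApprovalTests;
    using NUnit.Framework;

    [TestFixture]
    public class OrderTests
    {
        [Test]
        public void Asserting_ChecksOneHandCraftedDetail()
        {
            var order = OrderService.Create("gold customer", items: 3); // hypothetical
            Assert.AreEqual(3, order.Lines.Count); // hand-crafted check
        }

        [Test]
        public void Approving_ReviewsTheWholeResult()
        {
            var order = OrderService.Create("gold customer", items: 3); // hypothetical
            // The full order goes into a .received file; you review it once
            // for correctness and approve it as the golden master check.
            Approvals.Verify(order.ToString());
        }
    }
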
Last week, I watched my European Testing Conference co-organizers Aki Salmi and Llewellyn Falco work on the conference website. There was content I wanted to add that the platform did not support without a significant restructuring effort. The site is nothing fancy, just Jekyll + markup files built into HTML, and it has just a few pages.

As they paired, the first thing they added was ApprovalTests for the current pages, to keep them under control while restructuring. For the next couple of hours, I just listened in as they stumbled on various kinds of unexpected problems that the tests caught, moving fast to fix things and adjust whatever they were changing. I felt I was listening to the magic of "proper unit tests" that I so rarely get to see as part of my work.
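
I didn't keep their actual code, but the lock-down idea is roughly this kind of test, sketched here in C# with hypothetical page names and output paths (their tests ran against the Jekyll build):

    using System.IO;
    using ApprovalTests;
    using ApprovalTests.Namers;
    using NUnit.Framework;

    [TestFixture]
    public class SitePageTests
    {
        private const string SiteDir = "_site"; // hypothetical build output folder

        [TestCase("index.html")]
        [TestCase("speakers.html")]
        [TestCase("schedule.html")]
        public void PageStaysUnchangedWhileRestructuring(string page)
        {
            // One .approved golden master per page, named by scenario.
            using (ApprovalResults.ForScenario(page))
            {
                Approvals.Verify(File.ReadAllText(Path.Combine(SiteDir, page)));
            }
        }
    }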

Aki tweeted after the session: 
If you go and see the tweet I quoted, an exemplary confusion plays out as a result of it:
  1. Someone states ApprovalTests are somehow special / a good idea.
  2. Someone else asks why they are different from normal tests.
  3. An example is given of how they are different.
  4. The example is dismissed as something you wouldn't want to test anyway.
I don't mean to pick on the person in this particular discussion, as what he says is something that happens again and again. It seems that it takes time for the conceptual differences of ApprovalTests in unit testing to sink in before you see the potential.

I look at these discussions more through the positives of what happens to the programming work when these tests are around, and I see it again and again. In the hands of Llewellyn Falco and anyone who pairs with him, ApprovalTests are magical. Finding a way of expressing that magic is a wonderful puzzle that often directs my thinking around testing and ApprovalTests.

Wednesday, June 22, 2016

Fascinated with ApprovalTests

Last Friday, I watched a group of software craftsmen agree on 3 × 20 minutes of paired demonstration on the refactoring kata "Gilded Rose", and then change their minds after the first 20 minutes.

The first 20 minutes was a pretty awesome demonstration of Llewellyn Falco and Aki Salmi pairing in strong style, using ApprovalTests in Java. The first 15 minutes went into a cycle of adding tests using LegacyApprovals (which I knew from C# as CombinationApprovals), adding criteria to a single line of code based on what the Emma code coverage tool hinted might be missing. For every expected result, they just documented with ApprovalTests what the current one was, rather than trying in any way to understand or describe it themselves.

In the last 5 minutes, they cleaned up some code, covered by 100% unit test coverage.

The group used the 5 minutes after their time-box on extending to mutation testing, adding some more tests as the PiTest tool suggested some of the existing tests were weak.

Total: 1350 tests with one line of code, and expected results defined as "if it works in production now, let's just keep it that way".
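
In C# terms (their demo was in Java, and these particular value arrays are my guesses, not theirs), the shape of that one line is roughly this; the counts of the arrays simply multiply into the total number of verified cases:

    using ApprovalTests.Combinations;
    using NUnit.Framework;

    [TestFixture]
    public class GildedRoseLockDownTests
    {
        // Hypothetical input arrays; in the demo the counts multiplied
        // out to 1350 cases, all verified against one .approved file.
        private static readonly string[] Names =
            { "Aged Brie", "Sulfuras, Hand of Ragnaros", "normal item" };
        private static readonly int[] SellIns = { -1, 0, 1, 5, 11 };
        private static readonly int[] Qualities = { 0, 1, 49, 50 };

        [Test]
        public void UpdateQuality_AllCombinations()
        {
            CombinationApprovals.VerifyAllCombinations(
                UpdateQualityFor, Names, SellIns, Qualities);
        }

        // Hypothetical wrapper around the kata's GildedRose.UpdateQuality.
        private static string UpdateQualityFor(string name, int sellIn, int quality)
        {
            var item = new Item { Name = name, SellIn = sellIn, Quality = quality };
            new GildedRose(new[] { item }).UpdateQuality();
            return $"{item.Name}, {item.SellIn}, {item.Quality}";
        }
    }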

On Saturday, I took part in a code retreat and used ApprovalTests in some of my sessions. This left me thinking about why I'm particularly fascinated with ApprovalTests.
  1. The tests in the file format with explanatory padding make sense in the world I think in.
  2. The "recognition" part is what I feel I have special skills in anyway as an exploratory tester.
  3. The idea of filtering and processing depending on what technology you're testing, to keep the focus on testing, makes sense to me.
  4. There are practical solutions to things that I've sometimes thought of as too hard to test, like running combinations quickly or keeping tests that work against an external service fast (the IExecutableQuery stuff, where you do the slow things only on failure).
  5. The idea of doing special things on failure for granularity makes sense, and changing reporters when investigating reminds me again of exploratory testing (see the sketch after this list).
  6. I like how this feels so much like exploratory testing on the unit level.
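
On point 5, reporters are what ApprovalTests does on failure, and swapping them is a one-attribute change; a minimal sketch with a hypothetical output string:

    using ApprovalTests;
    using ApprovalTests.Reporters;
    using NUnit.Framework;

    [TestFixture]
    [UseReporter(typeof(DiffReporter))] // on failure, open a diff tool
    // [UseReporter(typeof(ClipboardReporter))] // or copy the approve command instead
    public class ReporterSketchTests
    {
        [Test]
        public void FailureBehaviorIsPluggable()
        {
            // Nothing special happens while the test passes; the chosen
            // reporter only kicks in when the .received file differs.
            Approvals.Verify("output under investigation"); // hypothetical output
        }
    }
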
Knowing the developer who created this stuff isn't actually a negative either. But for me, that would often be more of a reason to actively find reasons not to like it. I don't endorse friends' stuff blindly.

Better do some more exploratory testing on the tool. Next up is understanding how well the claims of what the different Approvers do actually hold consistently across the implementation. And then I was thinking of finding ways of breaking it in its environment of use.

If you want to pair on this, ping me. Just some educational fun on someone's open source project. 


Monday, October 12, 2015

Unit testing is about Asserts

There's this weird state of mind that I'm in with unit testing. I read unit tests, I talk about unit tests, but I rarely write any. But I've been around them enough to start to think I'm recognizing some patterns there, and to know when the stuff being suggested to me is useful and good, and when it's not.

Last week, one of my teams had a unit testing training. My motivations to participate were two-fold. I was really curious why they had set learning unit testing as their target (now that they no longer have a professional tester working with them, the rumor is that they might be struggling more with delivery quality). I also wanted to see how the new training company was doing on the topic.

Out of the three hours, we spent two on theory and one on hands-on work. The theory was the usual stuff: refactor to find functional bits you can test; isolate the bits that make things hard for you and leave them out of your tests. We also looked at my team's tests, without finding a single good example. There was a lot of bad out there.

The hands-on work focused on removing the need for the profiler in the tests. We had been heavy on mocking with JustMock, which made testing possible, but to an extent it made the tests slow, doing things for us that weren't needed. So we were removing dependencies on the profiler.

While looking at the examples and the tests we were trying to change, my eyes kept going to the asserts in the tests. That's where the bit of testing within the tests happens. And I could not help noticing how weak the asserts were. I had been primed to pay attention to that.

Earlier the same week, I had listened to a talk by Llewellyn Falco titled "Common Path for Approval Testing - patterns for more powerful asserts". Perhaps that is why I made the connection: all of the asserts I was seeing were at the first step. We would assert numbers and boolean values for existence. Nothing more advanced. Nothing more meaningful. Asserting simple things is simple, but it leaves a lot to hope for from the perspective of actually noticing that things break, and, when they break, being able to analyze what is going on. The picture introducing step 6 (diff tools), doing things on failure that can be slow but happen only for items that fail, was an eye-opener to me.
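
A minimal sketch of the difference I was seeing, with a hypothetical ReportBuilder; the count assert passes even when every row is wrong, while the approval locks down the actual content:

    using ApprovalTests;
    using NUnit.Framework;

    [TestFixture]
    public class AssertStrengthTests
    {
        [Test]
        public void WeakAssert_ChecksExistenceOnly()
        {
            var rows = ReportBuilder.Build(); // hypothetical
            Assert.AreEqual(12, rows.Count);  // passes even if every row is wrong
        }

        [Test]
        public void StrongerAssert_LocksDownTheContents()
        {
            var rows = ReportBuilder.Build(); // hypothetical
            // Every row goes into the .approved file; any change in any row
            // fails the test and leaves a .received file to diff and analyze.
            Approvals.VerifyAll(rows, label: "row");
        }
    }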

With all of this, I was left to wonder. Having weak tests that run faster cannot be the goal we should be having. When there are many things to work on, how do teams really end up choosing what to start from? This particular choice, looking from a tester's perspective, makes little sense. If testing happens somewhere in the unit tests, the asserts seem like the place to pay attention to. Thus I'm very drawn to the idea of making them more powerful.