It all started with a manifesto in 2002. Except that it was called Principles, not a manifesto. And where the Agile Manifesto was the outcome of a meeting, this one grew out of the book-writing process of Cem Kaner, James Bach and Bret Pettichord.
THE SEVEN PRINCIPLES OF CONTEXT-DRIVEN TESTING
- The value of any practice depends on its context.
- There are good practices in context, but there are no best practices.
- People, working together, are the most important part of any project’s context.
- Projects unfold over time in ways that are often not predictable.
- The product is a solution. If the problem isn’t solved, the product doesn’t work.
- Good software testing is a challenging intellectual process.
- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
I loved the book. I loved the internal disagreement of advice in the book, and its attempt to describe when you might choose one way over another. I became a context-driven tester and have been one ever since. Being a context-driven tester has helped me move between waterfall and agile projects, between financial systems and web development, and always make sense of what would be appropriate there rather than what was appropriate in my previous work engagement.
The core principle for me has turned out to be number 3: people, working together, are the most important part of any project's context. People come with skills and habits. Skills grow and habits change, but not overnight - it's a longer process. I can, however, do the best testing possible for the current time and current constraints (my choices) while I keep on working to change the world as we know it (the givens).
There is nothing anti-automation in context-driven testing. Automation extends our abilities in testing, and it is a part of most testing strategies. Automation is done by people, maintained by people and serves the needs of people. Just like any other software product, automation in testing is a solution. If the problem isn't solved, the product doesn't work. And while there's great automation in testing out there, there are a lot of solutions that neither solve nor really help with the problem.
There's always the idea of opportunity cost: time used on one thing could be used on another. And as a context-driven tester, my interpretation of the principles has been that it is my responsibility to drive a balanced view of short-term and long-term gains with regard to what (and by whom) we could automate.
Working with agile projects, I've learned that the only thing that stays is change. My team learns, and changes. Learning changes my context. If my context changes, I change with it. I can always reflect back on the principles - am I still providing testing that is "a challenging intellectual process"?
The Context-Driven Testing blog includes a commentary that I take to heart:
Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.
Specific situations over prescribed notions of testing. It does not stop me from experimenting with TDD (that my developers just have not gotten the hang of, yet!) or BDD (that just did not work out for us at the time we tried it) or mob programming (that helped us get closer to real teamwork) or even most of test automation (building the skill takes a while). While experimenting and trying to get better, we keep the release engine running. And release daily. Testing included - in a context-driven fashion, growing as the context enables something different.
Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.