I gave a talk yesterday that I think of as being about this idea: in a world where we've found useful and valuable ways of including automation in testing in a relevant scope, what more is there to do on that theme? Surely we're not done; the future (and the present) holds many challenges around spreading skills and knowledge and innovating around today's problems.
I keep thinking back to an after-conference discussion with someone who, I think, fits my idea of where we can be if we stop fighting against test automation's existence. They had loads and loads of automated unit, component, and integration tests. They found their tests valuable and useful, if not perfect. But I'm most intrigued by the reminder of how the problems we talk about change when we have automation that isn't just wasting our time and effort.
The problem we talked about was that it takes too long to run the tests. What to do?
I want to first take a moment to appreciate how different a problem this is from not knowing how to create useful test automation in the first place. It sounded like this organization had many things going for them: a product that is an API for other developers to use; smart and caring developers and testers working together; lots of data to dig into with questions about which tests have been useful.
So we talked about the problem. Thirty minutes is a long time to wait. They had already parallelized their test execution. But there were just a lot of tests.
We talked about experiments to drop some of the tests: thoughtful reading to remove overlaps, which takes a lot of effort; tagging tests into different groups to be able to run subsets; creating random subsets and dropping them from schedules to see the impact of dropping them; or rotating different tests through each weekday so that every test still ends up running once a week, as in the sketch below.
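As a minimal sketch of that last, weekday-rotation idea, here is what it could look like with pytest. The hashing scheme and the hook are my own assumptions for illustration, not what this team actually did:

```python
# conftest.py -- a minimal sketch of weekday-based test rotation, assuming pytest.
# Each test is hashed into one of seven stable buckets; on any given day, only
# that day's bucket runs, so every test runs at least once per week.
import datetime
import zlib


def pytest_collection_modifyitems(config, items):
    today = datetime.date.today().weekday()  # 0 = Monday ... 6 = Sunday
    selected, deselected = [], []
    for item in items:
        # A stable hash of the test's node id decides its weekday bucket,
        # so the same test always lands on the same day.
        bucket = zlib.crc32(item.nodeid.encode()) % 7
        (selected if bucket == today else deselected).append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```

The deterministic hash matters here: a purely random daily sample could leave some tests unrun for weeks, while stable buckets guarantee the once-a-week floor.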
We talked about how we don't really know what will fail in the future. How end-user core scenarios might be a good thing to keep in mind, but how those might stay only in the minds of the developers changing the code without ever being in the automation. And how there just does not seem to be one right answer.
I have some work to do to get to these new-world problems. And I'm so happy to see that some amazing, smart people who also understand the value of exploration in the overall palette are there already. Maybe the next real step for these people is machine learning on the changes. I look forward to seeing people take steps in that direction.
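To make that direction a little more concrete, here is a toy sketch, entirely my own assumption about where such a thing could start: mine past CI runs for which tests failed when which files changed, and rank tests for a new change set by that co-occurrence. Real predictive test selection systems use far richer signals and actual learned models; the class and names below are hypothetical.

```python
# A toy sketch of change-based test selection, assuming a history of
# (changed_files, failed_tests) pairs mined from past CI runs.
from collections import defaultdict


class ChangeBasedSelector:
    def __init__(self):
        # co[file][test] = how often this test failed when this file changed
        self.co = defaultdict(lambda: defaultdict(int))

    def record_run(self, changed_files, failed_tests):
        for f in changed_files:
            for t in failed_tests:
                self.co[f][t] += 1

    def rank_tests(self, changed_files, top_n=50):
        # Score each test by how often it failed alongside the changed files,
        # and return the top candidates to run first.
        scores = defaultdict(int)
        for f in changed_files:
            for t, n in self.co[f].items():
                scores[t] += n
        return sorted(scores, key=scores.get, reverse=True)[:top_n]


# Hypothetical usage with made-up file and test names:
selector = ChangeBasedSelector()
selector.record_run({"api/orders.py"}, {"test_order_create", "test_order_total"})
selector.record_run({"api/orders.py", "api/auth.py"}, {"test_login"})
print(selector.rank_tests({"api/orders.py"}))
```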