Last year I experienced something I had not experienced in a while: a four-month stabilisation period. A core part of the testing-related transformations I had been doing with three different organizations was bringing release timeframes down from these months-long efforts to less than an hour. Needless to say, I considered the four-month stabilisation period a personal failure.
Just so you don't think you need to explain to me that failing is ok: I am quite comfortable with failing. I like to think back to a phrase popularised by Bezos, 'working on bigger failures right now', a reminder that playing it too safe means you won't find space to innovate. Failing is an opportunity for learning, and when you experiment, failures are inevitable in proportion to successes.
In a retrospective session with the team, we inspected our ways of working and concluded that this is what you get when you take many steps away from a known good baseline with insufficient, untimely testing. This would best be fixed by making releases routine.
There is a fairly simple recipe for that:
- Start from a known good baseline
- Make changes that allow for the change you want for your users
- Test the changes in a timely fashion
- Release a new known good baseline
In practice, that recipe unfolds into a longer checklist. For this release, it looked like this:
- Write release notes - 26 individual changes, each with a message worth saying (see the git log sketch after this list)
- Create release checklist - while I know it by heart, others may find it useful to tick off what needs doing before saying it's done
- Select / design title level tests for test execution (evidence in addition to TA - test automation)
- Split epics between this release and the next so that epics reflect completed scope over aspirational scope and can be closed for the release
- Document per-epic acceptance criteria, especially the out-of-scope things - documentation is an output, not an input, but when I was the one testing, it was a daily output, not something to catch up on at release time
- Add Jira tasks into epics to match changes - this is totally unnecessary, but I do it to keep a manager at bay; close them routinely, since you already tested them at pull request stage
- Link title level tests to epics - again something normally done daily as testing progresses, but this time it was left out of the daily routine
- Verify the traceability matrix of epics ('requirements') to tests ('evidence') shows the right status (a sketch of this check follows the list)
- Execute any tests in test execution - optimally just the one we call release testing, which takes 15 minutes on the staging environment
- Open source license check - run the license tool, compare against accepted OSS licenses, and update licenses.txt to stay compliant with attribution-style licenses (sketched after the list)
- Lock release version - select the release commit hash and lock the exact version with a pull request (sketched after the list)
- Review Artifactory Xray statistics for Docker image licenses and vulnerabilities
- Review TA (test automation) statistics to see that it is holding up and growing
- Press the Release button in Jira so that issues get tagged - or work around the reasons why you couldn't do just that
- Run promotion that makes the release and confirm the package
- Install to staging environment - anything from 3 minutes running a pipeline to 30 minutes doing it the way a customer would
- Announce the release - letting others know is usually useful
- Change version for next release in configs
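A few sketches of the more mechanical checklist items follow. First, collecting the raw material for the release notes: a minimal sketch assuming the changes live in git and the previous release is tagged; the tag name is hypothetical, and the output still needs a human pass to become messages worth saying.

```python
import subprocess

def draft_release_notes(previous_release_tag: str) -> str:
    """List one-line summaries of every change since the previous release tag."""
    log = subprocess.run(
        ["git", "log", "--oneline", "--no-merges", f"{previous_release_tag}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # "abc1234 commit subject" -> "- commit subject"
    bullets = [f"- {line.split(' ', 1)[1]}" for line in log if " " in line]
    return "\n".join(["Release notes (draft):"] + bullets)

if __name__ == "__main__":
    print(draft_release_notes("release-1.2.0"))  # hypothetical previous tag
```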
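For the traceability matrix item, a minimal sketch of the check, assuming the epic-to-test links and test results have already been exported from Jira and the test tool into plain dictionaries; the data shapes and keys are assumptions.

```python
from typing import Dict, List

def verify_traceability(
    epic_to_tests: Dict[str, List[str]],  # epic key -> linked title level tests
    test_status: Dict[str, str],          # test name -> "passed" / "failed" / "not run"
) -> List[str]:
    """Return problems: epics with no evidence, or evidence that is not passing."""
    problems = []
    for epic, tests in epic_to_tests.items():
        if not tests:
            problems.append(f"{epic}: no linked tests, no evidence")
        for test in tests:
            status = test_status.get(test, "not run")
            if status != "passed":
                problems.append(f"{epic}: {test} is {status}")
    return problems

# Hypothetical example data
issues = verify_traceability(
    {"EPIC-101": ["Release testing"], "EPIC-102": []},
    {"Release testing": "passed"},
)
print("\n".join(issues) if issues else "Traceability matrix shows the right status")
```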
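For the open source license check, a minimal sketch of the compare-and-update step, assuming the license tool's report has already been parsed into a package-to-license mapping; the accepted-license set and the licenses.txt format are placeholders, not the real ones.

```python
ACCEPTED_LICENSES = {"MIT", "BSD-3-Clause", "Apache-2.0"}  # placeholder allowlist

def check_licenses(report: dict) -> None:
    """Fail on unaccepted licenses, then rewrite licenses.txt for attribution."""
    violations = {pkg: lic for pkg, lic in report.items() if lic not in ACCEPTED_LICENSES}
    if violations:
        raise SystemExit(f"Unaccepted licenses found: {violations}")
    # Attribution-style licenses require naming the package and its license.
    with open("licenses.txt", "w", encoding="utf-8") as out:
        for pkg, lic in sorted(report.items()):
            out.write(f"{pkg}: {lic}\n")

check_licenses({"requests": "Apache-2.0", "attrs": "MIT"})  # hypothetical report
```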
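And for locking the release version, a sketch that resolves the commit hash and pins it in a file that would then go in through a pull request; the file name and its format are assumptions.

```python
import subprocess
from pathlib import Path

RELEASE_FILE = Path("release-version.txt")  # hypothetical file checked in via PR

def lock_release_version(version: str) -> str:
    """Pin the exact version and the commit hash the release is built from."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True,
    ).stdout.strip()
    RELEASE_FILE.write_text(f"version={version}\ncommit={commit}\n", encoding="utf-8")
    return commit

print(lock_release_version("1.3.0"))  # hypothetical version number
```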