- Team size. How many people delivering the work I was analyzing.
- Monthly Active Devices (MAD). The first number I was curious about was how many customers the team was serving with the same people. Being a DevOps team meant that the team did development and deployment of new changes, but also provided support for an ever-growing customer base we counted in impressively large numbers. Telemetry was an invaluable source for this information. It was not a measure of money coming in. It was people using the product successfully.
- Work done represented in Jira tickets. I was trying hard to use Jira only as an inbox of work coming from outside the immediate team, and for the most part I succeeded with that, messing up my chances of showing all work in Jira ticket numbers (I consider this a success!). About a third of the visible ticket work done was maintenance, responding to real customer issues and queries. Two thirds were internally sourced.
- Work coordinated represented in Jira tickets. Other teams were much stricter in not accepting work from hallway conversations, and we often found ourselves in a role of caring for work that others in the overall ecosystem should do. Funnily enough, the numbers showed that for every 2 tickets we had worked on ourselves, we had created 3 for other teams. The number showed our growing role in ensuring other teams understood what hopes were directed towards them. It was also fascinating to realize that 70% of the work we had identified for others was done within the same year, indicating that it wasn't just empty passing of ideas but a major driving force.
- Code changes. With the idea that for a DevOps team nothing changes unless the code (including configurations) changes, I looked around for numbers of code going into the product. I counted how many people contributed to the codebases and noted it was growing, and I counted how many separate codebases there were and that that too was growing. I counted the number of changes to the product, and saw it double year over year. I noted that for every 4 changes to the product, we had 3 changes to system-level test automation. I noted code sharing had increased. Year-over-year numbers were a delight: from 16% to 41% (people committing to over N components) and from 22% to 43% (more than M people committing on them) on the two perspectives of sharing I sampled. I checked that my team was a quarter of the people working on the product line, and yet we had contributed 44% of changes. I compared changes to Jira tickets to learn that for each Jira ticket, we had 6 changes in. Better to use the time on changing code than managing Jira, I would say.
- Releases. I counted releases, and combinations included in releases. If I wanted to show a smaller number, I just counted how many times we completed the process: 9, the number published in the NEXTA article we wrote on our test automation experience.
- Features pending on the next team. I counted that while we had 16 of them a year before, we had none with the new process of taking full benefit of all code being changeable, including code owned by other teams. Writing code over writing tickets for anything of priority to our customer segment.
- Features delivered. I reverse engineered out the features from the ticket and change numbers, and got to yet another (smaller) number.
- Daily tests run. I counted how many tests we had now running on a daily basis. Again, information that is published: 200 000.
Tuesday, December 29, 2020
Thursday, December 17, 2020
There is a fascinating way of coming to the idea that the problem is almost always testing. Here's a little story of something that has happened to me many times in many organizations, and that I was recently inspired to think about. Maybe it is because it is almost Christmas. :)
Speaking in metaphors: the box with Christmas Ornaments inside.
Once upon a time, there was a product owner who ordered a Box with Christmas Ornaments. As product owners go, they diligently logged into Jira their Epic describing acceptance criteria clearly outlining what the Box with Christmas Ornaments would look like delivered.
The Developers and the Testers got busy with their respective work. Testers carefully reviewed the acceptance criteria that were co-created, and outlined their details of how testing would happen. Developers outlined the work they needed to do, split the work into pieces, and brilliantly communicated to testers which pieces were made available at each point. Testers cared and pinged on progress, but when things aren't complete, they are not complete.
The test environment for the delivery was a large table. As pieces were ready from the Developers, their CI system delivered an updated version into the middle of a table. The Box with Ornaments was first a pile of cardboard, and everyone could see it was not there yet. But as work progressed, the cardboard turned into a Box, without the Ornaments. As per status, pieces were delivered (and tested), but clear parts of the overall delivery were still undone.
Asked for status and wanting to be positive, Developers would report on each piece completed, and the Box on the table looked like it was there. It was there quite some time. Asking the Testers for status on testing, one would learn that testing was incomplete, and it was so easy to forget that there are scenarios that required both the Box and the Ornaments to make sense of the final item, even if we could and had tested to learn about each individually.
The product owner, equipped with their Epic in Jira looking towards the table concluded:
Things get stuck in the process. They are long in an intermediate stage. It feels like they don't care about delivering me my package, they just leave it lying around for testing.
It's not like they ordered the Box without Ornaments. Yet they feel it looks ready enough that putting the Ornaments in is extra wait time.
To achieve flow of ready work into the hands of whoever is expecting it, optimizing developer time across multiple deliveries really does the negative trick. Yet we still, in so many cases, consider this to be a problem with testing.
I know how to fix it. Let's deliver it as soon as developer says so. No more Testers in the place you imagine them - between implementation and you having that feature at your hands.
A better fix is to deliver the empty box all the way to the customer as it is ready, and carefully think if the thing they really wanted was the Ornaments, and if another order of delivery would have made more sense.
Tuesday, December 8, 2020
In companies, I mostly see two patterns with regards to red in test automation radiators:
- Fear of Red. We do whatever we can, including being afraid of change to avoid red. Red does not belong here. Red means failure. Red means I did not test before my changes can be seen by others.
- Ignorance of Red. We analyze red, and let it hang around a little too long, without sufficient care on the idea that one known red hides an unknown red.
Tuesday, November 24, 2020
Working on a sizable system, feature after feature I found us struggling with the timeliness of completing testing. At each feature kickoff, testerkind showed up to listen and start learning. While development was ongoing, more learning through exploring took place. And in the end, after the feature was completed, tests were documented in test automation, and, just in case, time was taken for some more final exploring.
It seemed like a solid recipe, but what I aspired to was finding a way where testing could walk more in sync with the development. The learning time was either half-attention or elsewhere occupied, and it resembled a mini-waterfall for each feature.
I formulated an experiment, with the intent of learning if being active with test automation implementation from the start would enable us to finish together. And so far, it is looking much better than before.
The way I approached test automation implementation was something I had not done before. It felt like a thing to try, to move the focus from testerkind listening and learning to actively contributing and designing. In preparation for the feature kickoff, I drew an image of points of control and visibility, tailored to the specific feature.
- ssh to a command line in a remote computer is a touch point of both control and visibility
- reading a file in the filesystem is a touch point of visibility
- reading a system log is a touch point of visibility
- calling a REST API and verifying responses is a touch point of both visibility and control
- clicking a button on the UI and entering values is a touch point of control
Having agreed on the touch points, we could do test automation one touch point at a time. We could already work on the touch points that were not about to change, and see which touch points depended on the changes. We could talk about how the order of development enabled testing. And most importantly, we could talk about which of the touch points I had identified did not make sense at the system scale, because we could address those risks in unit and component tests.
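To make the idea concrete, the touch-point framing could be sketched in code. This is a minimal sketch with hypothetical names; the stand-in checks here replace the real ssh, log-reading, and REST calls a project would use:

```python
# A sketch of organizing automation one touch point at a time.
# Each touch point records whether it offers visibility, control, or both,
# and carries one automated check exercising it.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TouchPoint:
    name: str
    visibility: bool                # can we observe the system here?
    control: bool                   # can we drive the system here?
    check: Callable[[], bool]       # one automated check for this touch point


def run_touch_points(points: List[TouchPoint]) -> Dict[str, bool]:
    """Run each touch point's check and report results per touch point."""
    return {p.name: p.check() for p in points}


# Hypothetical stand-ins: a real suite would read an actual log file
# or call an actual REST endpoint instead of these hard-coded values.
def log_contains_started() -> bool:
    log_lines = ["service starting", "service started"]
    return any("started" in line for line in log_lines)


def rest_status_ok() -> bool:
    response = {"status": 200}
    return response["status"] == 200


points = [
    TouchPoint("system log", visibility=True, control=False, check=log_contains_started),
    TouchPoint("REST API", visibility=True, control=True, check=rest_status_ok),
]

results = run_touch_points(points)
```

Structured this way, a new touch point is one more entry in the list, which matches the idea of growing automation touch point by touch point as the feature's pieces become available.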
The current popular thinking in the testing community is to paraphrase testability into something wider than visibility and control. This little exercise reminded me that I can drive better collaboration test-automation-first, without losing any of the exploratory testing aspects - quite the contrary. This seemed to turn the "exploring to learn randomly while waiting" into a little more purposeful activity, where learning was still taking place.
In a few weeks, I will know if the original aspiration of starting together actively to finish together will see a positive indication all the way to the end. But it looks good enough to justify sharing what I tried.
Monday, November 23, 2020
At work when I find a bug, I'm lucky. I follow through during my work hours, help with fixing it, address side effects, all the works. That's work.
When I'm off work, I use software. I guess it is hard not to these days. And a lot of the software recently makes my life miserable in the sense that I'm already busy doing interesting things in life, and yet it has the audacity of blocking me from my good intentions of minding my business.
Last Friday, I was enjoying the afterglow of winning a major award, bringing people to my profile and my book, only to learn that my books had vanished from LeanPub. On the very day I was more likely to reach new audiences, LeanPub had taken them down!
After a full excruciating day of wondering what I had done wrong to have my account suspended for authorship, LeanPub in Canada woke up to tell me that I had, unfortunately, run into a "rare bug". The next day I had my books back, and a more detailed explanation of the conditions of the bug.
If I felt like wasting more of my time, I guess I could go about trying to make a case for financial losses.
- It took time of my busy day to figure out how to report the bug (not making it easy...)
- It caused significant emotional distress with the history of one book taken down in a dispute the claimant was not willing to take to court
- It most likely resulted in lost sales
- Someone claimed I was "testing in production" because through my profession, I couldn't be a user.
- Someone claimed I was "testing without consent" because I wasn't part of a bug bounty in finding this
- Someone claimed that I was breaking the law using the software with a vulnerability hitting the bug
- Someone claimed I was blackmailing Foodora on the bug I had already reported, for free, expressing to them I was not doing this for money in our communication
- Someone claimed I was criminally getting financial benefit of the bug
- Someone claimed the company could sue me, their user, for libel in telling they had a bug
- Someone claimed I was upset they did not pay me more, not on the fact they didn't pay a competent tester in the first place (I know how to get to the bug; I would have found it working for them - exploratory testers couldn't avoid it, an automation-only strategy or test cases could)
- Someone claimed I was eating on the company's expense, when I was reporting on 200 euros of losses (for food they had thrown away)
A colleague was working on a new type of application, one that did not exist before. The team scratched together a pretty but quick prototype, a true MVP (minimum viable product) and started testing that on real users.
Every user they gave the app to, gave the same feedback on a detail they did not like after first experience of use.
Fixing the thing would take a while, but since the feedback was so unanimous, the team got to work. A week later, they delivered a new version.
Every user that had the app before, hated that there was a change, they had grown to like the way it was.
Every user they added for first time experience, hated how it was different from the usual way things are done. For the first day, until they learn.
I shared this story with my team at work this morning, as my team was wondering if we should immediately put the "in review" state back on our work board in Jira. The developers were so used to calling "in review" their done, while refusing to actually call it done and move it to the done column. They used to leave tickets in this in-review column, and no one other than themselves found any value in it.
We agreed to give the change two weeks before going back to reflect on it with the idea that we might want to put it back.
Users of things will feel differently when they are still adjusting to change, and when they have adjusted to change. Your needs of designing things may be to ease getting started, or ease the continued use. Designing is complicated. Pay attention.
Sunday, November 22, 2020
I worked with a product owner who liked seeing all of us in the team put down very clear tasks we were doing, with estimates of how long it would take to complete them. Testing never fit that idea too well.
Some tasks we do, until we are done with them.
Some tasks we do, until we've used time we're willing to invest.
Some tasks, through learning become very different than what they originally were and writing it down may get you doing the wrong (planned) task, instead of the right (emergent) task.
Test cases, and managers saying things like "I expect to see 20 test cases completed every work day" or "add one case to automation every day", give testing the feel that instead of learning and exploring, we're squeezed on time.
Thinking of the explorers of territories, finding something new and interesting was probably not likely if the orders were to take the main road, stick to it, and make it to the destination on an optimized schedule.
The one thing that turns your testing into exploratory testing is time. Take time, and think about using that time in a smart way. You won't know all things in advance. But without time, you wouldn't be able to be open to learn.
This insight comes from last Friday and me trying to get to a conference stage. I left on time, and turns along the way led me to a completely different talk than I intended to give - the one I intended to give would not have worked out for that stage.
Things that can only happen when isolating...— Maaret Pyhäjärvi (@maaretp) November 20, 2020
I was about to leave for downtown to record a talk for #MimmitKoodaa. As I stepped out the door, I realized we have snow. I had no winter tires. Wait, it gets better (1/n).
Friday, November 20, 2020
Friday, November 13, 2020
If all the world is open to you, why do things like they were always done? Throughout my career, I have held one heuristic above all others for exploratory testing:
Never be bored.
Obviously, I have been bored. But I have never accepted it as a given, or a thing I cannot change with internal locus of control. I can't change the world around me (always), but I can change how I look at the world. And I choose to look at it with creativity and playfulness, and being just enough of a rebel that things can be different.
With enough rehearsing of ideation on both how I would test and how I would frame the system I test in (not the software system, but the system of people in organizations), I've collected ideas that I can combine into new ones. I find myself being one of those people who show up to a retrospective with ideas of what we could try, and with increasing comfort with the idea that to learn the most, half of our ideas should be far enough off that they fail, and the other half should take us further through successes.
I'm quite happy with some of the experiments that stuck with us. The "no product owner". The "vanishing backlogs". The "start from automation". The "zero bug policy". The "wait for pull". The "stop reporting bugs". The "rotating responsibility". The "power of network". I could go on listing things personal, team and organization, and notice how much more I enjoy telling the stories of successes. Let's try X could become a personal mantra.
This week, I decided to experiment on how I share my experiments. I'd love to see people propose ideas from left and right, but while that isn't happening, I can model it through what I do. I wanted a visual describing what change goes on with me. The visual - an improvement experiment canvas - is a tool for canvassing: seeking support, asking people for opinions on things that are easily invisible.
I ended up creating this:
The first two experiments I described this week. One of them is duration of a few weeks, from start to finish of a feature. Another is one month, to see if a major change in how we track our work feels good or not. Defer commitment. Defer scale. First learn. And learn continuously.
Wednesday, November 11, 2020
In the last few weeks, my Twitter feed has seen an increasing number of people recommending a Netflix series called The Queen's Gambit. The series is awesome, highly recommended, and focuses on two things: substance abuse and chess. My inspiration for testing comes from the latter.
Watching the series and doing a bit of research, I learned that the queen's gambit is a chess opening: one of the first moves a chess player starting with white can open with. The queen's gambit is popular, has attacking prowess, and puts pressure on the opponent to defend correctly. It offers a pawn as a sacrifice for control of the center of the board, the sacrifice made in search of a more advantageous position.
Gambit - a game opening - is a word I was blissfully unaware of until I watched the series last weekend.
You may already guess that my interest in chess is superficial. My interest lies with exploratory testing. With the conference season at its peak, my Twitter timeline also displays anti-automation takes from prominent people in testing. Talking about a dichotomy of exploratory testing and automation is anti-automation. Because really, the only reason these two are sometimes considered separate is (lack of) skill. Skill in testing (for the programmers) and skill in programming (for the non-programmers). Skill limits what openings you have available for the collaborative game of testing in software development.
So I call for a new opening move for exploratory testing: the automationist's gambit. Learn every day about both testing and programming, and treat them as mutually supportive activities embedded in the same, growing individual no matter how many testing years they have under their belt.
You can't automate well without exploring. While you create that code, you will look at what goes on around you and acquire understanding, and report things you consider wrong.
You can't explore well without automating. If you don't have automation to document with, to extend your reach with and then throw away to avoid maintaining it, your reach is limited.
Failing test automation is an invitation to explore. Exploring provides you ideas of what to document in your automation, or where to reach that you could not by hand. Automation serves as a magnifying glass for the details you need to check to know what works and what only appears to work. Automation serves as the web of the spider, calling you in to see what it caught this time.
Exploring without programming is common, but it is not the only way. So I offer you the automationist's gambit for consideration. Test automation makes you a stronger explorer. And this option is available for those who are just starting their careers just as much as it is for seasoned professionals. Try using the anti-automation energy to learn the skill you are missing, and see if useful automation has an increased chance of emerging. It has for me.
The automationist's gambit is like the queen's gambit. It's not about becoming an automator. It is about enabling the automator by sacrificing something that is in the way: your need to warn people that automation isn't all-encompassing. Trust me, the managers already know, and you are not helping them or yourselves by presenting as an anti-automation person.
Doing is one thing. Doing together is another. I'm all for the collaboration skills that enable us to perform the automationist's gambit as a pair that is stronger than an individual could be.
Friday, October 30, 2020
25 years ago, I became a tester by accident. Looking back, falling into it was like falling in love, and learning about the profession has only grown the bond. I didn't intend to be a tester.
Localization testing with test cases
I was hired out of first years in university to do testing on the side at a localization agency. Localization testing is a special brand of testing where you can always define correctness from the combination of functionality of the original language reference implementation and basic understanding of the target language. My target language to begin with was Greek, and my level of understanding it was most definitely basic.
We were given step-by-step test cases to execute, each step including the always-default comparison to the original language reference you would run on a second computer, identical to the one you were using for the target language. On top of those test cases, we were told we could invoice a fixed number of hours on something they called *ad hoc testing*. They instructed that it came in two styles: freeform ad hoc testing and directed ad hoc testing. In the first, we had no guidance for our testing. In the latter, we would first choose a goal, an area we were looking at, and stick to our commitment to that goal for a period of time. We were asked to report our bugs attributing their source to the style we were using, and we ended up with measurements of the portions found in test cases, freeform ad hoc testing and directed ad hoc testing.
It was only years later, reading Cem Kaner's foundational book *Testing Computer Software* that I learned that the two styles of ad hoc testing were referred to as exploratory testing, and started growing a deeper understanding of the style.
For the first projects that I completed, I loved test cases. I was somewhat clueless on all things the software could and should do. The walkthrough instruction of what to click and what to verify, only extended by detailed comparison of the same functionalities on the two computers, made me feel like I was doing the right thing. I was ticking off test cases.
I remember one early indication that something was wrong and missing with the approach, though. Our customer had a step in their process where they did something they called quality assurance, selecting some of our test cases and using them for directed ad hoc testing. Contractually this quality assurance step was tied to the payment and allowed a percentage cut if you had missed reporting bugs. My fellow off-the-street new tester colleagues and I were often hitting that cut, not doing as good a job as those doing the quality assurance on our work.
Functional Testing and Test Design
I changed jobs, and got my first testing job outside localization testing. Now all of a sudden there was no original language reference, but there was a pile of documentation. And there were no ready test cases, but a request to create your own.
The product company I worked with cared less for me producing test cases, and more for me producing great bug reports. Whatever specifications I was given, I would go try out the claims with the software, and as I was designing tests with the older version of the software, I was already reporting issues. Test cases I wrote became more of an output of me learning the application to support me doing the things again next time, and understanding in detail where the change planned for the project would happen.
The experience with test design responsibility was like an awakening. Given the responsibility to think through what I would test, over following someone else's guidelines, I started finding my feet in the detective work I would come to know as exploratory testing. My focus moved from following steps to creating my own steps, and leaving something useful behind. I still left behind test cases, as that was all I knew. And many, many bug reports that would result in fixes.
I remember this period as the time when I realized that you can be respected as a tester by being a great tester, and setting my mind on that path. I would learn and become the best tester I could be.
In 1999, my university started offering the first ever course on Software Testing. They found an industry teacher, and as someone with some years in the profession on the side of school, I was one of the first to enroll.
The course was my first connection with the world of testing outside doing what I was told and supported in doing by the companies that I worked for.
I thought I knew more than I did, and while I passed the course, I did not do great. The course was exercise heavy, and we moved from writing a test plan to writing test cases and to writing test automation with the tools of that time. But I worked with testing, I was enthusiastic about testing, and a year later I was given the chance to redo the course I had passed, as the new teacher's course assistant.
The second year of the course came with a new teacher, and while the course was similar in idea, it was very different in my memories. We again went through the project work from test plan to test cases to execution and automation. Following the lecturer's grading guidelines for the exercises solidified my theoretical and academic understanding that test case design was important. It also enabled me to stay fairly oblivious to how I had actually done my test cases in the functional testing work I was doing: as a result of exploratory testing.
Becoming the Teacher and a Researcher
Having been a teacher's assistant, I was asked to become the teacher. It fit perfectly the professional growth needs I had identified as someone who was petrified at the idea of having to do a presentation. I needed to rehearse, regularly, and there's no better place for it than showing up to teach something I thought I knew.
The other side of the teaching job was working as a testing researcher. I read anything and everything on testing I could get my hands on. The work bought me books, and I devoured them. Having to teach, I would teach what I knew so I taught importance of testing, test planning and test cases.
As a researcher, I got to look at companies that I didn't work at, that had other testers. Our research focus was on lightweight approaches for small product companies, and the *Testing Computer Software* book describing ways of testing in Silicon Valley and using exploratory testing became my go to sources.
I remember this era as the time I tried so hard to learn to talk about testing. However, when I would talk to different professors and explain my area of research, they would usually tell me that the thing I was talking about was not testing but project management, risk management or configuration management, depending on their respective focus. I also learned to turn the question around for them, learning that instead of this journey of humans and psychology I was on, they would have liked me to figure out a formula for coverage of test cases from a specification.
My research focus ended up being continuous testing, and splitting testing into feedback cycles at different levels. I was transforming organizations with the ideas of looking at in-sync testing (things you can do in testing alongside developing) and off-sync testing (things you felt needed to be scheduled separately), and transforming the latter into the former. Just in time for agile to take over the world and this to become a thing we were all doing.
My defining experience of the time was that anything I was thinking of teaching forward was already written by Cem Kaner. I had also read the works of visible publishing authors of the time, including Glenford Myers, Boris Beizer and Rex Black, to name the people with a heavy impact on my formative years.
Becoming a Consultant
The work at university did not pay much, and the combination of having done testing, having read about testing, and having taught and researched testing led to being offered a better paying position in the industry. I got to be a senior consultant, and with my experience with research, I soon became someone the consultancy would send first to their new companies.
I had done my homework on testing, and I had established a good continuous reading and studying regime, and started doing public presentations as a way to find people interested in same kinds of topics.
My consulting work fell mostly into three categories. I would do Test Process Improvement assessments for companies, using that as a form of research into the state of testing in Finland. I would kickstart testing process improvement and testing projects with various customers, but was deemed too important for opening new leads to ever be allowed to properly work as a tester in any of them. And to get my touch on the testing tool the company was working on, I was assigned as test manager for that product, with the limitation that I had to outsource all the work on that one. I was spread thin on everything.
As I was outsourcing our own product's testing, I got to hire two lovely testers from another company. Funnily enough, our consulting was doing so well that we could not afford to have any of our own people away from paying projects. The two testers had taken my testing course at the university - I had taught test cases to them. As I shared my plan of what work we were buying and how I saw it happening in an exploratory testing frame, I remember their surprise as they told me: "Great to see you are doing this in a smart way, and not the way you taught us to do it at the university."
The insight of realizing the divide between how I continued to teach testing - as it was taught in the books - and how I continued to guide testing in projects became my second foundational revelation.
The Two Foundational Revelations
In my second tester job, I had come to learn that the results of testing improved significantly if I did not follow test cases - neither someone else's nor my own. Finding the idea of agency, my free will and ability to do the best job in the frame I was given, started to become evident.
And in my consulting job with a test manager responsibility, I had come to learn that I would not ask people I was expecting to do a good job at testing to create and test with test cases.
If not before, now it was evident. My work was on exploratory testing. But it would still take 15 more years for this book to emerge.
Becoming a Great Exploratory Tester
Consulting work and opening new leads gave me an unearned reputation as someone who knew testing, but I got to do so little of it that all I knew was how to create a container in which great testing would happen. I taught testing at various organizations, but I didn't really get them (or myself) to test on those courses. I was well on my way to becoming a test manager, which did not fit my aspiration of becoming a great tester.
In escaping consultancy and this particular employer's unwillingness to let me work as a tester in any of the projects, I thought I would become an independent consultant. I believed myself to be at the height of my testing fame in Finland, and many companies expressed interest in working with me specifically. What I had not counted on was that the consultancy would give me a block list of companies I could not work with, and it included all the companies I had in mind.
The block was 6 months, and my savings did not allow me to take 6 months off work. When one of the companies on that list offered me a position for those 6 months I couldn't consult, I jumped in and found myself continuing for three more years.
The product company had an exceptional manager, and under his guidance my work was defined to include a healthy dose of hands-on testing in addition to teaching and enabling others. I worried I might not be good, but set out to learn to be as good as I could ever be.
The first 6-month project was a great mix of hands-on exploratory testing, documented after testing as test cases I would leave behind, and sharing my test ideas with a developer who would turn them into automation. It was a tiny project, and we arrived at a way of working where I would openly share my ideas of where bugs might lurk and how I would test, and the developer would surprise me by telling me I was right but too slow - he had already both created little tools to help him test those areas and fixed the problems. The project manager was puzzled that I had so few bugs to report, a result of our very tight collaboration where my testing produced ideas of what more to test, rather than findings of bugs.
As I stayed with the company, I moved from the tiny project to the main products the company was creating. My past specialty in localization testing made me report huge quantities of localization bugs and a fair share of functionality bugs. I found myself leaning on what came easily to me - enabling others and pointing out missed perspectives - while avoiding solo responsibility for any of the features.
Instead, I brought in ideas like continuous releases and worked towards an understanding of the importance of testing. I evaluated bug reporting systems, and brought in Jira just before I left. To every training I provided internally, I invited half of the attendees from other companies, as mixing up perspectives made us all better.
This era of my career enabled me to do consulting on the side, and most of my consulting was training that paid well. I used the money made on the side for travel, and found myself learning more about exploratory testing at the London Exploratory Workshop on Testing (LEWT), as well as traveling to conferences to speak even though the company didn't pay for my travel.
Context and How It Matters
With the publication of the Kaner et al. book *Lessons Learned in Software Testing*, I became curious about the idea of context-driven testing. Having researched and consulted for various companies, and having managed testing, I started looking for a stretch that would grow me as a tester in the context-driven mindset.
I started my context-hopping. I did a fair stint of pension insurance sector testing work, on both the contractor and customer side, to get a feel for how those differ. I again learned that I was considered too valuable to be allowed to do testing work myself, and was expected to lead from arm's length to the real work. I came to learn that, had I tested myself, I would have made a more significant impact in some of the moments I cared about. Instead, I supported different teams in moving from test-case-based testing to exploratory testing, and introduced ways of doing context-appropriate preparation for exploratory acceptance testing and of measuring the impact of the new style on testing results.
Again following the context and the idea of getting to hands-on testing, I moved to construction sector product development, and a team where I was the only tester. I had a manager very reluctant to allow exploratory testing, and I won him over with an experiment of categorizing my notes while testing. The construction sector work allowed me to test with a team, for years, and really evolve my hands-on testing abilities. I also got to evolve the team of developers from folks who produced a high percentage of big customer-visible error messages per login into folks who would look at me and know how I would like them to test to find those issues themselves.
Having found my tester feet in addition to my test manager feet, I would again try something different. I joined an organization with an extensive automation approach, and started figuring out what the right contribution for someone like me was.
I noticed myself sneaking in improvement ideas one at a time from my long invisible list of things we could try, knowing the team's ability to take in ideas was limited. I would suggest the next important test case. I would suggest the next possible way to split a feature into increments. And I would suggest the next way to stretch the process so that our results would improve. I would test, and find problems early in features, teaching everyone, willing or unwilling, what they could do and what I did.
We worked with customers in the millions, with continuous releases to people's personal computers, with a handful of problems the customers would notice. Everything they noticed would improve the way we delivered.
I came to view exploratory testing as the frame from which we automate. Automation was our chosen pieces of documentation, but exploratory testing was where the insight of relevant ideas was created.
New Ways of Learning and Teaching
In the last five years, I found ensemble programming and ensemble testing. The idea of working as a group on a single computer, learning and contributing to the work, became my go-to method of learning fast from others but also teaching what I already knew. I have learned more about exploratory testing and software development through collaborating in paired and ensembled settings, and faster, than I ever did before. We don't know what we don't know, and we can't ask for those things. Seeing them in action transfers the tacit knowledge so relevant for excellence in exploratory testing.
Social Software Testing Approaches became my signature move.
I don't believe any of us are ever ready. New products to test, and new development teams to build those products with, bring us new challenges. Starting from places low on test automation, I went on to grow automation without losing any of the exploratory testing value. Instead of working with just one team, I went on to work with multiple teams. Instead of transforming a team, I'm now figuring out how to transform organizations.
One person at a time. One bug at a time. One insight at a time. One impact at a time.
For me, exploratory testing is *the verb* that reminds me that we do more than follow steps even when given steps. And exploratory testing is *the noun* that reminds me of the frame of testing an organization needs to give to have excellent results.
I'm writing this book, from these experiences, to lead you faster to first exploratory testing *the verb* but then also to exploratory testing *the noun*.
Wednesday, October 28, 2020
One of my favorite quotes is:
It's not that I'm so smart, I just stay with the problems longer. --Albert Einstein
This idea has been central over the years as I have worked on understanding what it is that we call "Exploratory Testing". It's not a haphazard smart clicking thing, but it is learning deeply while testing.
We do our best work in Exploratory Testing when we stay with the problems longer. When we follow through things others let go, and investigate. When we dig in deeper until we understand what those symptoms mean for our projects.
Luck - or rather serendipity, the lucky accident - does play a role in testing. As we spend time focusing on coverage, we give our systems a chance to reveal information we did not plan for. Sometimes, but rarely, that happens as soon as we get started. Sometimes, but rarely, that information is so valuable it is worth our entire year's salary. Promising both at once is a myth, and this post is motivated by a colleague who is using that myth to sell a training course I now categorize as snake oil.
In a profession where we seek what is real and what is not through empirical evidence, falling for promises like "I teach you how to find a bug worth your entire year's salary in an hour" is just as bad as the excessive promises tool vendors make about test automation.
In the words of a super-successful golfer:
The more I practice the luckier I get -- Arnold Palmer
Spend time to be lucky in Exploratory Testing. A gamble-based approach to exploratory testing misses the point and encourages the idea that thoroughness isn't the goal. Serendipity is real, and it becomes real by staying with the application longer and using test automation to extend your reach in all kinds of directions.
And yes, I too have found a significant bug in an hour after joining a project. That says NOTHING about me and my abilities as an exploratory tester, and a lot about the project. Let's change projects so that they make us need to be a lot less heroic and a lot more investigative.
Tuesday, October 27, 2020
A week ago, I was reading the news in Finland to learn that a major psychotherapy service provider, Vastaamo, had received a ransom note from someone in possession of their patient database. I could guess I would soon find myself a victim, and a few days later, on Thursday, that's exactly what I was told. The event unfolded some more when, on Saturday, I, like apparently tens of thousands of others, received a marketing-style personalized ransom email asking me to pay.
I'm lucky - whatever discussions I have had there have already been aired on social media, and filing a crime report on the ransom was a no-brainer.
My first reaction was to be upset with Vastaamo for doing a crappy job protecting our information, as the criminal's messages implied that the reason they had the information was that the database was left online, with root:root access credentials. An open door, yes, but not an excuse for stealing something private, and even less of an excuse to blackmail folks.
My fascination with this case comes from being a professional tester. Over 25 years of working, I have been part of reporting, and getting fixed, hundreds, most likely thousands, of vulnerabilities. Even the specific problem of a weak password on relevant data in production - there's been more of that than I care to count. There's been protecting admin interfaces by thinking a secret address only we knew would protect them. There have been great plans of security controls that turned out to be just plans, never turned into reality. Well planned is not half done; it isn't even started.
Bad protection shouldn't happen, and I would love to say you need folks like myself, aware of the issues around security and keen to follow through to practice, to not leak through something this stupid. I even made the claim that this level of protection for *health records* is against the law as I filed that complaint against Vastaamo last Thursday. But bad protection happens, and all it takes is, as the now-fired CEO of Vastaamo claims, a human error. And perhaps deprioritizing work that would cover at least the basics of security controls.
As time passed and the news unfolded, my focus turned to my annoyance at how the news reported on when the company knew their data was stolen.
We need to separate, on a timeline, a few concepts.
- The Vulnerability is the open or insufficiently locked door
- The Breach is the moment someone walked through that door
- The Ransom is the moment when they used the data illegally in their possession for further steps
- The vulnerability was fixed in March 2019, and this is how they know data after that hasn't leaked (for this particular incident)
- You can't fix a thing you don't know is broken. So they knew of the vulnerability even if they didn't know of the breach.
- The ransom requests were reported to the police in September 2019, and this is how we know when the company knew for a fact they had been breached.
- The breach could have happened any time the vulnerability was there, and we have been given two points in time when the data was accessed. We are told the latter is something the company figured out in their security audit activities (which led to fixing the vulnerability). We don't know if the company knew of the November 2018 breach before the ransom request.
Wednesday, October 7, 2020
Year after year, organization after organization, I join in anticipation of not having to see test cases other than those that are automated and that, through continuous execution, keep themselves up to date.
Yet year after year, organization after organization, I learn that people still write test cases. Those things where there's a title and steps. Those that set out a flow through the application that needs to be verified, with steps you can choose to disobey because you are not a robot.
The way I look at it, ideas are cheap, and we don't care much about how well they are documented. The ideas on post-it notes I often find difficult to decipher a month later, but they are critical notes when I'm learning to structure things in a way that I can recall later. Test cases are something we may want to keep around for later; they are more than ideas. They have structure that supports executing them, even if they are checklist-like. They often include steps and an idea of an order of execution. Test cases are better considered an output of testing (and automated!) than an input to testing.
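To make that idea concrete, here is a minimal, hypothetical sketch of what a test case kept as an *output* of exploratory testing might look like when automated. The function and the findings are illustrative inventions, not from any project described here; the point is that each check documents a lesson learned while exploring, rather than prescribing steps to follow.

```python
# Hypothetical example: checklist items discovered while exploring,
# kept around as automated checks rather than as written-out steps.

def normalize_username(raw: str) -> str:
    """Stand-in for production code: trim and lowercase a username."""
    return raw.strip().lower()


def test_username_survives_copy_paste_whitespace():
    # Found while exploring: users paste names with trailing spaces
    # from a spreadsheet. This check documents that lesson.
    assert normalize_username("  Maaret ") == "maaret"


def test_username_mixed_case_login():
    # Found while exploring: mixed-case logins failed on one backend.
    assert normalize_username("MAARET") == "maaret"


if __name__ == "__main__":
    test_username_survives_copy_paste_whitespace()
    test_username_mixed_case_login()
    print("checks pass")
```

The checks carry the *why* in their names and comments, so the documentation stays executable and tells a future reader what was once learned the hard way.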
The thing is, some of our worst experiences will never get published, because we can't talk about them while we are in the middle of them. Inspired by conversations today, I go back to an experience I could not talk about back when it happened, but I can today, with examples.
I was working with a product / consulting company, and the consulting side I was representing was doing quite well - so well, in fact, that it was hard to make space for any of our testing experts to test our own product. The consulting paid so much better. The management pulled one of us consultants out to test release 1, another to test release 2, and so forth. Eventually, I was next in line.
Thinking about it 20 years later, it is hilarious to realize I was already positioned as the resident testing expert. Going in to test the product, I intended to do well - don't we all. I asked what the ones before me had done, and was pointed to a test management tool by the proud colleagues who had made sure they documented their testing.
These are actual samples of what I received.
These were considered test cases. They are a lot worse than the better test cases I have seen over the years, but even the better ones carry the universal stigma of not being useful for good testing.
These test cases were awful. They still are awful. Time did not make them better, only worse. They were step-by-step instructions, with a lot of effort spent faking testing by elaborately describing the same steps just so the tester using them would know they had, for example, moved a box up, down, right and left. They had magic values defined in a way that completely misses the point of why those values were selected; I can only guess the choice of data reflected naming by concept, not naming for the likelihood of finding problems, or even for pointing at ideas of where problems might be.
On my round, the first thing I did was throw these away. They were done by my respected colleagues, and they did the best work they could think of in the situation they had at hand. I could only hope the test cases did not reflect the testing that was actually done.
I moved the testing to exploratory. No more test cases. The closest we would get was a checklist of features and ideas, written after it was all learned and structured through multiple rounds of rehearsal in actual testing.
While what I run into in organizations nowadays is generally not this bad, it isn't much better either. I've shown again and again that dropping test cases improves testing. The controlled four-week experiment where we used pre-designed cases for two weeks, finding zero problems, and freeform exploratory testing with prepared test data for the other two, finding all the problems there were to report from that acceptance test. The removal of 39 pages with 46 "test cases" of which only 3 pieces of information were something I did not know when joining a new company in week 1. And the other cases, for which I never gave a public presentation I could dig out years later for the numbers I was sharing.
I wish the world was ready for good testing, but it still isn't. Automating and working through ideas of better and good seem like our best hope. And I'm delighted that test automation is merely a smart way of documenting your testing.