
Tuesday, November 20, 2018

Stop Analyzing, Start Automating

I see systems. I guess we see things we like seeing, and I like seeing how the bits and pieces connect, what is clear and what is wrapped in the mystery of promised future learning. I like seeing value, and users, and flows. Each piece alone is part of that flow, but the promise comes together in the system.

For years, I've tested systems. I've figured out ingenious ways of seeing what changes, learning heuristics for which changes matter, all grounded in the question: why would anyone want to use this? Every moment testing an individual piece, as an exploratory tester, connects somehow to a greater purpose in the context of the system.

When I worked with a team of 10 developers as their only tester, we were doing daily releases without test automation, and it worked great. It worked great as a path to slowly but steadily introducing test automation. But even without test automation, we contained the size of change. Each change would flow isolated through the pipeline with its manual steps. Just like coding was manual, testing was too. Think, test, implement, test, think, release - a steady flow of features of value.

But now, the scale is different. Where I had 10 people before, I now have 100. And 100 developers merging non-isolated changes to trunk as soon as they think they're ready produce change at a pace that is just too much for one tester, even one with ingenious ways of seeing and knowing things. This is where test automation as documentation comes in. With executable documentation, test automation frees my energy to analyze on top of it, not all of it. I no longer need to analyze details, but trends. Clusters of changes. Driving forces for those changes. Risks in the system, and risks in the people creating those systems. Automation catches some of it - quite a lot of it. And what it does not catch is a chance to identify what the automation is missing. To document with test automation.

I find myself in places where automation is at first more wishful thinking than an actual net of coverage. But learning every day, and documenting with automation, it grows every day.

My analyzing of changes happens on a backlog visualization. When I can fix and forget, I do. But sometimes things need bigger focus. And as an exploratory tester and a system tester, I see what we miss. I label it, and ask for it.

I wouldn't know how to connect this stuff with reality if I did not spend time, hands on, with the systems we're building. The product works as external imagination, making my requests of what should be tested more practical. And while I prepare for the automation work, I just so happen to have already tested without the automation, found some problems and gotten them fixed.

We emphasize automation, for a reason. But in addition to folks who automate, we need folks who care about identifying the things that take us further, that make our automation do real testing. Not end to end, but covering a web of granular feedback mechanisms, so that we know when things are not right.

Wednesday, November 7, 2018

Achievements of a Silo

Once upon a time there was a company, much like many other companies yet unique in many ways. As companies do, they hired some great people into different teams. The people had one thing in common: they were all awesome. But they came from very different backgrounds, with very different ideas.

In one of the teams where some great people in testing landed, the testers were feeling frustrated. With a new team, no infrastructure for builds and test automation, yet features flying around being implemented and tested, they found it hard to take the time and focus they felt they needed. So, as some great people do, they actively drove forward a solution: they created a new team on the side, focused on just creating the infrastructure, and dropped all the in-team testing work they had managed to get started. Without facilitation, the in-team testing work turned tiny, focused on units and components, and perspectives around value and the system vanished in hopes of someone else picking them up, like magic.

With the new team and new focus, the great people made great progress. They set up a fancy pipeline with all sorts of fancy tests, and a lovely set of images and documents to share what a great machinery they had built. Wherever this new team showed up, they remembered to tell how well they were doing, and all the awesome stuff the machinery now made available, with sample tests of all the sorts the pipeline theoretically should hold.

The original team focusing on features was handed the great machinery, with high hopes for expanding it. The machinery-building team went on building more machinery, on the side of the machinery being used for real projects.

The fun part of this fable arrives after many months have passed. The overall project, with its lost focus on who owns the system perspectives, was struggling a bit, and it became obvious that getting a perspective into readiness wasn't an easy task. So, as companies do, a meeting was called.

In the meeting, the machinery team presented all the great things they had built, and great they were. For every example built into the machinery, the team focusing on features brought today's reality. That test job - turned off as it broke. Same with the other. And another. Of all the great things the machinery promised, none was realized in practice.

The lesson of this story: it's not about your team's output, but about the outcome of all the different teams together. You can create the shiniest machinery there is, but if it is not used, and if the relevant parts of it in real use get turned off, your proof of concept running all the shiny things provided very little value. It may have taught the great people in the machinery team some valuable personal lessons from the technical perspective. What it should teach is that the value of whatever we are building comes from the use of it.

I'm a big believer in teams actively participating in building their continuous integration machinery, and I slightly loathe the belief that learning together while building it and taking it into use isn't needed because someone else could do the learning for you.

Learning with you is possible; learning for you is not. Achievements in a silo often end up worth little.

Tuesday, September 11, 2018

Tests Surviving Maintenance

As we create tests for our automation suites, we put some care and effort into the stuff we are creating. As part of that care and effort, we create a visualization on a radiator of how each test is doing. The blue (sometimes yellow/red) boxes provide a structure around which we discuss making a release, with the meaning that the basic things still work.

When something turns not blue and the person who created the piece of code is around, they usually go tweak it, exercising some more care and effort on top of the already sunk cost. But a repeating pattern seems to be that if the person who created the piece of code is not around, the secondary maintainer fixes things through the method of deletion.

I had a favorite test that I did not create. But I originated the need for it, and it was relevant enough that I convinced one of my favorite developers (they're all my favorites, to be honest) to create it for me. It was the first step towards a script doing something a colleague had been doing long-term: just opening the application and seeing if it was still alive.
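
As a rough illustration of what such a test might look like - a minimal sketch with invented names and an invented path, not the actual test from this story:

```python
import subprocess
import time

def application_stays_alive(executable_path, grace_seconds=10):
    """Launch the application and report whether it survives its first seconds."""
    process = subprocess.Popen([executable_path])
    time.sleep(grace_seconds)
    alive = process.poll() is None  # poll() returns None while still running
    if alive:
        process.terminate()
    return alive

def test_application_stays_alive():
    # The path is invented for the sketch.
    assert application_stays_alive("C:/Program Files/OurApp/app.exe")
```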

I cared for that test. So I tested what would happen if the machine died by letting the machine die. 
The test, like so many other tests, did not survive maintenance. Instead of bringing back the machine it was watching, the watcher vanished.

As I mentioned this on Twitter, someone suggested that perhaps the test was missing clarity of intent. However, I think the intent was clear enough, but debugging for a vanished machine was harder than deleting the test. Lazy won. The problem might have been a lack of shared intent, where people have the tendency of maintaining other people's stuff through deletion. The only mechanism I've seen really work on shared intent is mobbing, and in previous cases it has significantly improved the chances of other people's tests surviving maintenance.

Lazy wins so easily. Failing tests that could prevent deploys - by showing the version should not be deployed - get deleted in maintenance. It's a people issue, not a code issue.

We need blue to deploy. Are we true to ourselves in getting our tests to keep us honest? Or do we let tests die in maintenance more often than not?

Friday, September 7, 2018

Three Kinds Of Testing

This week brought me a couple of reminders of a past I wish I had left behind, but one that is still very much day-to-day for some other testers. This is a past of writing test cases. And when I say writing test cases, I mean the non-automated kind. The documents that help drive testing. The idea that if only we wrote them well enough, anyone could pitch in on any of the testing.

There are organizations that put careful thought into their test case documentation. I'm lucky to be in an organization that puts careful thought into their test execution with emphasis on learning.

Some weeks ago I tweeted that I don't think we need to use both automated and exploratory testing because these are not the same. With this week's realizations, I think there are three kinds of testing.

There's the kind of testing I work with. I would call that exploratory testing. It encompasses smart use of tools, programming, and even regression test automation, in a frame of learning.

There's the kind of testing that the test case folks work with. I would call that manual testing. It includes the creation of manual procedures for testing, with emphasis on planning ahead of time, not so much on learning.

And then there's the kind of testing that all too many test automation folks do. They take a manual test idea and turn it into automation, so that whatever is hard is left out. They take their "designs for tests" from somewhere outside their own work.

The first kind is really the only kind. And people doing that kind of testing may identify as testers, test automation specialists, or software developers. It's not about the role, but about the mindset of learning through empirical evidence that seeks to disprove, in order to build a stronger case for the idea that things might work after all.

Tuesday, March 27, 2018

The Test Automation Trap

There's a pattern that we keep seeing in "agile" projects again and again.

We work together as a team to implement a feature. We automate tests for that feature as part of its definition of done. As an end result, we have some more tests than before, on all layers of tests. We get the test runs blue and we make a release.

We work together to implement the next feature. The previously added tests light up our test runs in all the colors of a Christmas tree, and in addition to adding new tests for the new functionality, we clean up the previous ones.

The longer we continue, the worse the Christmas tree lights get. The more time we spend fixing the past tests, the less time we have for the new tests. And we take shortcuts in fixing our past tests, just removing the ones we deemed so necessary before.

And no one talks about it. It is a ritual that we must go through. Like a rite of passage.

Over time no one cares about how well the automation tests things. All we care for is that it passes for us to get through the gate.

I've seen so many people trapped in the cycle of being too busy to think about *why the tests exist* and *what value they are really giving us*. These people have no time for manual testing, because - very honestly - automation eats up all their time. And they might not even see that the approach is not really working out for them.

The test automation trap creates testing zombies. Ones that make the moves, but have stopped learning about what they're doing.

The best way I know out of the trap is to start caring about testing again. Put testing, not the scripts, at the center. It's time to talk about risks and strategies again. It's time to build up a test automation asset that supports whatever strategies you're going for. Stop moving through the motions, and think. Learn. Look at where your time goes. Experiment your way out of the trap of magical moves that feel like a better idea than they are.

Wednesday, February 7, 2018

Driving test automation forward as a product

I'm in a middle of a very complicated relationship, best defined by love-hate. On some aspects of it, I just LOVE what we've done. Yet on other aspects, I HATE where we are. It feels both a little schizophrenic and balanced. And I'm talking about the test automation I work with.

I work with it by being on the sidelines. I know I can step in whenever I feel like it, but no one requires me to. I can look at it both as an insider and an outsider. My place and position is unique. I find that I see things others don't pay attention to, and my attention brings out things others wouldn't otherwise notice. And I share this position with you, my dear reader, because there's something you could consider here:

  • if you are deep into automation, what perspective a step back can give you
  • if you are not deep into automation, what you can make sense of just by seeing concepts and reading code "as if it was English"
I'm working out my relationship with test automation because I'm no longer ok with test automation doing a bad job at testing, or with myself being a blocker for others by focusing on what it cannot do over what it can do.

There are things that I love, where other people's appreciation helps me appreciate them more.
  • Our ability to run automation that spins 14,000 clean OS instances up and down a day is quite an achievement, and getting from "I want a clean OS to install on" to "I can start installing" is a matter of a few seconds.
  • When a new person joins and isn't left to discover the environment on their own, it takes a day to get started. Compared to a new person discovering it on their own - weeks, with basic proficiency closer to 6 months - this makes me an even keener fan of pairing new hires on their first tasks.
  • It runs and it is kept running. It enables releasing in a way products of this complexity could not be released without it.

    There are things that I hate, that others seem to hate much less.
    • It guides new hires to create a corner of their own over sharing common assets.
    • It has tons of decisions embedded over time, which allows earlier hires to be judgmental about later hires "not doing things right".
    • Reuse has a manual coding element, taking days of coding just to introduce a concept like "same tests in another environment". And people would rather spend the days on the manual task than create an abstraction.
    • People think of it as "testing a lot" because it runs often, even if for a very limited set of things to test. It distorts *managers'* concepts of how well we've tested, when the same thing 1000 times is not 1000 times more testing for real.
    So when I said I would reframe myself as an architect, I find I reframe myself first as a test automation architect. I choose to work on things that drive the overall structures for the better. And just expressing the things I would like to see us work on brings me to an interesting place of shining a light on things that have always been the way they are.

    Since I still don't dwell in the code and implementation details all my days, I see concepts. I see that there are tests that are small (which I want more of) and tests that are large (which I want less of) - and the structure does not help me see them. I see tests, test-specific methods, and common methods - again, the structure does not help me see them. I see products, applications, and components - again, the structure does not help me see them. I see similar uses of resources, like having malware samples, temporary data, and persistent data, and I see that the use of those isn't consistent.

    I'm in a place where I have a vision of where we might head, for good or better, with limited ability to implement it all by myself. Alone I might be paralyzed by the limits of my abilities, while others with different abilities may be paralyzed by not seeing the things I see, or not requiring the things I require. In the last three years, I've acquired a superpower that allows me to still do much about this: pairing and mobbing. That superpower, in addition to making it possible to turn my great ideas into code, gives us all a chance of learning together. And I'm looking forward to it.

    Test automation is a product that tests our other products. Caring for its overall quality is just as necessary as caring for the details of each test.

    Tuesday, January 23, 2018

    Test Automation Smells so Obvious Anyone Could Notice

    Like so many exploratory testing enthusiasts before me, I can easily find things to do with my time so that I don't have to go dig into the realms of test automation. But every now and then, unlike so many exploratory testing enthusiasts, I go and take a dig at automation anyway. It reads mostly like English, after all.



    Here's my list of things to work on, inspired by looking at one set of tests today.

    Tests that don't test, only tour

    So you have some kind of structure for your tests. You have test suites/sets somewhere that bundle things together. You have some tests going into those suites/sets. And you have some libraries you can use. Great start. But take a look at the things in this structure that are considered tests. Can you find any that haven't got a single verification asserting that something must be true? When you do, these are tests that don't really test; they tour. They are a path to getting to a place where you actually want to do some testing. Don't muddle them together on the same level as things that actually check something.
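
    To make the smell concrete, here is a minimal sketch in pytest, with invented names - not from any real suite:

```python
class AppDriver:
    """Stand-in for whatever drives the application under test."""
    def login(self, user, password):
        return True
    def report_entry_count(self):
        return 3

def test_tour_to_reports():
    # A "test" that tours: it exercises a path but asserts nothing,
    # so it can only fail by crashing along the way.
    app = AppDriver()
    app.login("user", "secret")
    app.report_entry_count()  # result ignored

def test_reports_have_entries():
    # A test that tests: the same path, but it verifies an outcome.
    app = AppDriver()
    app.login("user", "secret")
    assert app.report_entry_count() > 0
```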

    Tests inheriting tests over libraries

    Inheritance can make things very muddled for your random stroller. You're trying to read the steps of what happens, and for the sake of similarity, someone came up with the great idea of inheriting and overriding to create the differences. The same things look like they could be done with libraries instead of test cases. Do you really have to conceptually mix up test cases with inheritance?
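
    A sketch of the difference, with invented names - inheritance hides the variation in an override, while a shared library function keeps each test readable top to bottom:

```python
import unittest

def fake_login(user):
    """Shared library function standing in for the real login steps."""
    return user in ("basic_user", "admin_user")

# The smell: the only difference hides in an overridden attribute,
# and the reader must walk the class hierarchy to know what runs.
class LoginTestBase(unittest.TestCase):
    user = "basic_user"
    def test_login(self):
        self.assertTrue(fake_login(self.user))

class AdminLoginTest(LoginTestBase):
    user = "admin_user"

# The alternative: flat tests calling the shared library.
class LoginTests(unittest.TestCase):
    def test_basic_user_login(self):
        self.assertTrue(fake_login("basic_user"))
    def test_admin_user_login(self):
        self.assertTrue(fake_login("admin_user"))
```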

    Randoms inside a test

    You see a test that says TestABC. Great, it looks like this is a test that tests ABC. You continue reading, and a call to random pops up in front of you. Whenever this test gets run, it always does ABC, but there are a few different ways to do ABC. Someone decided to leave which of the ways you get to fate, making sure that every time it fails you need to go and check which of the options was broken. Isn't there enough nondeterministic behavior in the tests without adding randoms in?
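
    A sketch of the fix, with invented names - parametrizing runs every variant deterministically, and a failure names the variant:

```python
import random
import pytest

WAYS_TO_DO_ABC = ["via_menu", "via_shortcut", "via_api"]

def do_abc(way):
    """Stand-in for actually doing ABC through the given route."""
    return way in WAYS_TO_DO_ABC

def test_abc():
    # The smell: which way gets tested is left to fate on every run.
    assert do_abc(random.choice(WAYS_TO_DO_ABC))

@pytest.mark.parametrize("way", WAYS_TO_DO_ABC)
def test_abc_deterministically(way):
    # Every way runs every time, as its own named test.
    assert do_abc(way)
```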

    Avoiding passing values

    Reading the code, you start realizing that a whole lot of method calls look like they're the same, yet they are not - quite. There's some little detail in each that differs. Your method could take an argument, and you could do some magic with the argument. But for some reason it seemed a better idea to create a number of separate methods to call, each without arguments but with what looks awfully like a would-be argument embedded in its name.
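
    The smell in miniature, with invented names - the would-be argument lives in the method names until someone lets it be an argument:

```python
# Near-duplicate methods, differing only by the value baked into the name.
def open_settings_page_english():
    return open_page("settings", language="en")

def open_settings_page_finnish():
    return open_page("settings", language="fi")

def open_settings_page_swedish():
    return open_page("settings", language="sv")

# One method taking the difference as an argument replaces all three.
def open_page(name, language="en"):
    return f"/{language}/{name}"  # stand-in for real navigation
```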

    Duplicating to my corner

    There's a directory somewhere that uses a label you recognize as a concept meaning ownership, and it draws you in. And you find a nice cozy corner of clearly laid out tests. Reading through their names, you start to feel like you've seen some of them before. Searching confirms it - there's a well-organized little corner where everything is concisely together, duplicating a lot of code that exists elsewhere. But that must be someone else's problem.


    A Fool's Coverage

    As an exploratory tester, you start realizing how much coverage there could be if what you learn and know turned into automation through better collaboration. You identify some of the English in the tests that makes sense in the world of using the application for exploring it, and realize how little of your ideas have ended up encoded into the tests. Whatever is in gets continuously monitored. Running the same thing a thousand times isn't really that much coverage - it's a version of a fool's coverage.



    Here's a thing: anything I can name and recognize, I can fix. And while these feel obvious to me today, they clearly were not equally obvious when they were introduced. What does your "let's fix these" list look like?


    Friday, January 12, 2018

    All I got for a week of programming was one lousy test script


    From the title, you might think this post is about venting on how slow it is to learn automation. If that's what you are looking for, this is not that post. Instead, this is a post about insights into what happens while we program test automation.

    There was a fairly simple end to end scenario that needed testing. The tool of choice was Python, and examples of doing something fairly similar were plentiful. 

    To maintain focus, the scenario was first drafted just as code comments. The steps the script should go through. The verifications that needed to happen along the way. The way we would determine what to make note of while the test was running, and what should stop the test from proceeding because continuing would make no sense.
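
    A sketch of what that comments-first draft might look like - the steps here are invented, not the actual scenario:

```python
def test_end_to_end_scenario():
    # Step 1: install the product on a clean machine
    # Step 2: apply settings from the management side
    # Step 3: trigger the feature under test
    # Verify: the expected event shows up on the management side
    # Note, but don't stop: how long the event takes to arrive
    # Stop here if: the product fails to install at all
    ...
```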

    It could all be very simple, except it almost never is.

    First of all, to figure out the scenario, some details of what to check require the external imagination: the product we are testing. Seeing the details of what could be verified needs hands on the computer. We could call that automating, but what we actually do is mostly manual. Sometimes we can run the start of the script to get to the point of pondering. But the pondering is still a manual process. We look at what is available that we could programmatically access. We think about what is good enough to determine if things work, and how the actual application would allow us to see things with code.

    And in that manual process, we learn that while we wanted to do a thing, for some reason it does not work. We find bugs. Some of the bugs we notice when we just run through the scenario manually. Other bugs we notice because automation is picky. Where a person can just work around some deficiencies, automation may get us momentarily stuck. Something else needs changing before the script can proceed. And we end up with todo-markings in our automation code, even fixing the problems in the application ourselves just to be able to make progress.

    Towards the end of the week, multiple little learnings later and with blocking bugs fixed, we finally get the script to a point where it runs in its intended scope. Only then can we think outside this little agreed box that took the whole week - and there's more. But also, just going through this one scenario is already making the work of adding another easier. There will again be bugs, but they will be different. And the scenario we already automated keeps running from its introduction on, alerting us to possible regressions.

    I write this post because I read that "Testing as an exploratory, investigative activity, cannot be replaced by automated checks". It bothers me how often we testers say this. The automated checks are done by people too. The human part of a check precedes creating the automation that successfully executes things. It grows as we add more checks. Many times when automating, we need to look in more detail.

    The risk to good testing isn't in including automation in the way we work. It is in not looking wide, if automation gives you the sense of already covering whatever scenarios are relevant. The risk is the automators who say "this is fully tested" when there really is one happy-day scenario with one set of very limited data and selections.

    Automation has so much power as executable documentation.

    Tuesday, September 26, 2017

    Mob Programming on a Robot Framework Test

    A year ago as I joined my current place of work, I eagerly volunteered to share on Mob Programming in a session. As I shared, I told people across team lines that I would love to do more of this thing here. It took nearly a year for my call to action to sink in.

    A few weeks back, a team lead from a completely different business line and product than the one I work on pinged me on our messaging system. They had heard that I did this thing called Mob Programming, and someone in the team was convinced enough that they should try it - so what would the practical steps be? They knew they wanted to mob on test automation, and explained that one challenge they were facing was that one person pretty much did all the work on it, and sharing that work in the team was not as straightforward as one might wish. Talking with three members of the team, including the team lead and whoever had drawn me in, we agreed on a time box of two hours. For setup, one of them would bring in the test setup (which was a little more than a computer, as we were testing a Router Box) and ideas for some tests we could be adding.

    It took a little convincing (not hard, though) to get the original three people to trust me with the idea of not having a session of teaching and introducing the test automation setup first, and of just diving in. And even though we agreed on that in advance, the temptation to explain bits that were not in the immediate focus of the task at hand was too much to resist.

    The invitation for the team of 7 said "test automation workshop", without a mention of the mechanism we would be using. As we got to work on adding tests, I asked the group to figure out who knew the least and made them sit in front of the keyboard first, stating just one rule: "no thinking on the keyboard". I also told them we'd be rotating every 3 minutes, so they would all get their turn on the keyboard and off it, except for the one automation expert. Their rule was to refrain from navigating unless others did not know (count to three before you speak).

    Watching the group work for 1.5 hours was a delight. I kept track of the rotation, and stepped into facilitating only when the discussion got out of hand and selecting between options was too hard. I noticed myself unable to stop some of the lecture-like discussions where someone felt the need to explain, but the balance of doing and talking was still good. People were engaged. And it was clear to an external observer that the team had strong skills and knowledge that was not shared, and in the mob format, insights and proposals on what the "definition of done" for the test case was got everyone's contributions.

    I learned a bit more about Robot Framework (and I'm still not a fan of introducing a Robot language on top of Python, when working with just one of them would seem the route of less pain). I learned about the use of Emacs (which I had somewhat forgotten) and that I could still live without it. I learned about the different emphases people had on naming and documentation in tests. I learned about their ideas of grouping tests into suites and tagging them. I learned about thinking in terms of separating tests when the Do is different, not when the Verify needs a few checks for completeness. I learned to Google Robot Framework library references. And I learned that this team, just like many other teams at the company, is amazing.

    Asking around in retro, here's what I got from the team: This was engaging, entertaining. We expected to cover more. The result of what we did was different from the first idea of what the test would be, and different means better in this case.

    My main takeaway was to do this with my team on our automation. If I booked a session, they would show up. 

    Thursday, September 21, 2017

    What makes a test automation expert?

    I was part of a working group that created an article called 125 Awesome Testers You Should Keep Your Eye on Always. It may not be obvious, but that list is a response to another article called 51 automated testing Experts You Should Keep Your Eye on Always. That list had only four women (at least it had four women!) and let me tell you a big public secret:
    It is not because there aren't many awesome women in automation. It is because people don't look around and pay attention.
    I could have many different criteria on what makes a test automation expert:
    • Speaks about test automation in public (conferences, articles) in a way that others find valuable
    • Does epic stuff on making automation work out and do real testing
    • Is identified as a creator of a test automation framework or library
    • Speaks only of automation and never in a manner that addresses its limits
    The 125 awesome testers list does not identify automation separately, because I find that most of these people contribute to test automation in a significant way. Not all of the people on either of those lists have created an open source tool of their own. Not all of the people on either of those lists write test automation code as their main thing.

    We can be awesome at automation in so many ways. Writing code alone in a corner is not the only way. Many of us work in teams that collaborate: pair, or even mob. Coding is not the only way to do automation.
    • Delivering insights that are directly transferable to useful test automation is a way of doing automation. 
    • Working on the automation architecture, defining what we share is a way of doing automation.
    • Helping see what we've done through lenses of value in testing is a way of doing automation. 
    • Reading code without writing a line and commenting on what gets tested is a way of doing automation. 
    • Pairing and mobbing are ways of doing automation.
    We don't say coding is all there is to application development, so why would coding be all there is to test automation development?
    There's been a particular experience that has shaped my thinking around this a lot, which is working with mob programming. After programming in 14 different programming languages, I still identified as a non-programmer because my interests were wider. I actively forgot the experience I had, and downplayed it for decades. What changed me was seeing people who are programmers in action. I did not change because I started coding more. I changed because I started seeing how little everyone actually codes.

    The image below is from a presentation by Anssi Lehtelä, a fellow tester in Finland who now also has two years of mob programming with his team under his belt. A core insight I find we share is that in coding, there is surprisingly little coding. It's thinking and discussions. And that's what we've always been great at too! And don't forget googling - they google like crazy!

    Lists tell you who the list maker follows. Check whether you even have a possibility of recognizing the awesome women in automation by running http://proporti.onl on your Twitter feed. It can be brutal. Mine is 53% women. In the numbers I follow, there's easily a brilliant, inspirational woman to match every single man. In any topic, including automation. Start hearing more voices.

    Tuesday, August 22, 2017

    A look into a year of test automation

    It's been a year since I joined, and it's been a year of ramping up many things. I'm delighted about many things, most of all the wonderful people I get to work with.

    This post, however, is on something that has been nagging at the back of my head a long time, yet I've not taken any real action on, other than thinking. I feel we do a lot of test automation, yet it provides less actionable value than I'd like. A story we've all heard before. I've been around enough organizations to know that the things I can say with visibility into what we do are very much the same in other places, with some happy differences. The first step to better is recognizing where you are. We could be worse off - we could be unable to consider where we are against evidence of things we've already done.

    As I talked about my concerns out loud, I was reminded of things that test automation has been truly valuable for:
    • It finds crashes where human patience of sticking around long enough will not do the job, and turns random crashes into systematic patterns by saving the results of various runs
    • It keeps checking all operating systems where people don't do that
    • It notices side effects on basic functionality in an organization where loads of teams commit their changes on the same system without always understanding dependencies
    However, as I've observed things, I have not seen any of these really in action. We have not built stuff that would be crashing in new ways (or we don't test in ways that uncover those crashes). We run tests on all operating systems, but when they fail, the reasons are not operating system specific. And there are much simpler tests than the ones we run for figuring out that the backend system is again down for whatever reason. Plus, if our tests fail, we end up pinging other teams for fixes, and I'm growing a strong dislike of the idea that we run these tests instead of giving them to the teams we keep pinging.

    Regardless of how I feel, we have now invested one person and a full year into our team's test automation. So, what do we have?

    We have:
    • 5765 lines of code committed over 375 commits. That means roughly 30 commits a month, of an average size of 15 lines per commit.
    • The code splits into 35 tests with 1-8 steps each. Reading them, I'm still ashamed to call the stuff these tests do testing, because they cover very little ground. But they exist and keep running.
    • Our test automation Python code is rated 0.90/10 by Pylint. The number of complaints is 2839 - meaning every second line needs looking into. The real number is worse, as I have not set up some of the libraries yet.
    In the year, I cannot remember more than one instance where the tests that should protect my team (other teams have their own tests) found something that was feedback to my team. I remember many cases where we found problems while creating the test automation - problems we could also have found by just diligently covering the features manually, though I accept that automation has a tendency to drive out the detail.

    I remember more cases where we fixed the automation because it monitors that things are "as designed" when the design itself is off.

    I know I should do something about it, but I'm not sure if I find that worth my time. I prefer the manual approach most of the time. I prefer to throw away my code over leaving it running.

    There's only one thing I find motivation in while considering jumping into this. It's the idea that testers like me are rare, and when I'm gone, the test automation I helped create could do some real heavy lifting. I'm afraid my judgement is that this isn't it yet. But my bar is high, and I work to raise what we have.

    As I write this post, I remind myself of a core principle:
    all people (including myself) do the best work they can under the prevailing circumstances.
    Like a colleague of mine said: room for improvement. Time to get to it.

    Saturday, July 22, 2017

    Automation tests worth maintaining

    A retrospective was underway. Post-its with Keep / Drop / Try were added as we discussed the perspectives together. I stood a little to the side, being the loud one, leaving room for other people's voices. And then one voice spoke out, attaching a post-it to the wall:

    "It's so great we have full test automation for this feature"

    My mind races. Sure, it's great. But the automation we have covers nothing. While creating it for the basic cases, we found two problems. The first one was about the API we were using being overly sensitive to short names, where adding any of those completely messed up the functionality. I'm still not happy that the "fix" is to prevent short names that otherwise could be used. And the second one was around timing when changing many things. To see things positively, the second one is a typical sweet spot for automation to find for us. But since then, these tests have been running, finding nothing.

    Meanwhile, I had just started exploring. The number of issues was running somewhere around 30, including the announcement of the "fix" that made the system inconsistent and that I still deem a lazy fix.

    I said nothing, but my mind has been racing ever since. How can we have such different perspectives on how awesome and complete the automation is? The more "full" it's deemed, the more it annoys me. I seek useful and appropriate, in particular over the longer term, not just at the time of creation. I don't believe full coverage is what we seek.

    I know what the automated tests test, and I often use them as part of my explorations. There's a thing that enables me to create lists of various contents in various numbers, and I quite prefer generating over manually typing this stuff. There are simple cases of each basic feature that I can run with scripts, then manually add aspects of what I want to verify in exploration. I write a lot of code and extend what is there, but I rarely check in what I have - only if there was an insight I want to keep monitoring for the longer-term future.

    Cleaning up scripts and making them readable is work. Maintaining them when they exist is work. And I want to invest in that work when I believe the investment is worthwhile.

    The reason I started to tell this story is that I keep thinking that we do a lot of harm with the "manual" vs. "automated" testing dichotomy. My tests tend to be both. Manual (thinking) is what creates my automation. Automation (using tools and scripts) is what extends my reach in data and time.

    Tests worth maintaining are what most people mean by test automation. And I have my share of experience with that, through experimenting with automation on various levels.

    Wednesday, May 24, 2017

    Impact of Test Automation in my Everyday Worklife

    I'm not particularly convinced of the testing our team's test automation does for us. The scenarios in the automation are somewhat simple, yet take extensive time to run. They are *system tests*, and I would very much prefer seeing more things around the components the team is responsible for. System tests fail often for dependencies outside the team's control.

    I've been actively postponing really doing something about it, and today I stopped to think about what the existence of this minimal automation has meant for me.

    The better test automation around here seems to find random crashes (with logs and dumps that enable fixing), but that is really not the case with what I'm seeing up close.

    The impact the existence of test automation has had on my everyday work life is that I can see at a glance if the test systems are down, so I don't need to pay attention to installing regularly just to know it still installs.

    So I stopped to think: has this really changed something for me, personally? It has. I feel a little less rushed with my routines. And I can appreciate that.

    Monday, March 27, 2017

    The Myth of Automating without Exploring

    I feel the need to call out a mystical creature: a thinking tester who does not think. This creature is born because of *automation*. Somehow, because of the magic of automation, the smart, thinking tester dumbs down, forgets all the other activities around them, and just writes mindless code.

    This is what I feel I see when I see comparisons of what automation does to testing, most recently this one: Implication of Emphasis on Test Automation in CI.

    To create test automation, one must explore. One must figure out what it is that we're automating, and how we could consistently check the same things again and again. And while one seeks information for the purposes of automation, one tends to see problems in the design. Automation creation forces out a focus on detail, and this focus on detail that comes naturally with automation sometimes needs a specific mechanism when freeform exploring. Or the mechanism is the automation-thinking mindset.

    I remember reading various experience reports of people explaining how all the problems their automation ever found were found while creating the automation. I've had that experience in various situations. I've missed bugs for choosing not to automate because the ways I chose to test drove my focus of detail to different areas or concerns. I've found bugs that leave my automated tests in "expected fail" state until things get fixed.

    The discussion around automation is feeling weird. It's so black and white, so inhumane. Yet, at core of any great testing, automated or not, there is a smart person. It's the skills of that person that turn the activity into useful results. 

    Only the worst of the automators I've met dismiss the bugs they find while building the automation. It saves them time, surely, but misses a relevant part of the feedback they could be providing.


    A Regular Expression Drive-By

    I was working in strong-style pairing on my team's test automation code last week, to assess candidates to help us as consultants for a short timeframe of ramping up our new product capabilities. The mechanism of "for an idea to get from your head to the computer, it must go through someone else's hands" lends itself well to assessing both skills and collaboration. At first, I would navigate on the task I had selected - cleaning up some test automation code. But soon, I would hand the navigation over to my pair and be the hands writing the changes.

    There was this one particular line of code that in both sessions caught my eye and was emphasized by the reactions of my pairs: "This should have a code comment on it", "Ehh, what does this do? I have no idea!". It was a regular expression deciding whether a message should be parsed as passed or failed, but the choice of the keyword being sought was by no means obvious.

    I mentioned this out loud a few days later, just to seek confirmation that instead of the proposed code comment, it should really be captured in a convenience method with a helpful name. But as we talked through the specific example, we also realized that it would make sense to add a unit test on that regular expression to explain the logic just a bit more.

    The unit test would start failing if, for any reason, the messages we used to decide on pass/fail were no longer available, and it would be a more granular way of identifying where the problem was than reading the logs of the system test.
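
    A sketch of the idea - the message format, regex, and keywords here are invented, not the ones from our code:

```python
import re
import unittest

RESULT_PATTERN = re.compile(r"\bscan (completed|aborted)\b")

def message_indicates_pass(message):
    """Named convenience method: the intent the bare regex was hiding."""
    match = RESULT_PATTERN.search(message)
    return bool(match) and match.group(1) == "completed"

class ResultParsingTest(unittest.TestCase):
    # These start failing if the messages we rely on change shape.
    def test_completed_message_is_a_pass(self):
        self.assertTrue(message_indicates_pass("scan completed in 5s"))

    def test_aborted_message_is_a_fail(self):
        self.assertFalse(message_indicates_pass("scan aborted by user"))

    def test_unrecognized_message_is_a_fail(self):
        self.assertFalse(message_indicates_pass("no keyword here"))
```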

    A regular expression drive-by made me realize we should unit test our system tests more. 

    Tuesday, February 21, 2017

    It's all in perspective - virtual images for test automation use

    I seem to fluctuate between two perspectives on the test automation I get to witness. On some days (most), I find myself really frustrated with how much effort can go into such a small amount of testing. On other days, I find the platforms we've built very impressive, even if the focus of what we test could still improve. And in reflection on how others are doing, I lower my standards and expectations for today, allowing myself to feel very happy and proud of what people have accomplished.

    The piece that has me in awe today is the operating system provisioning system at the heart of the way test automation is done here. And I just learned we have open sourced (yet apparently publicized very little) the tooling for this: https://github.com/F-Secure/dvmps

    Just a high-level view: imagine spawning 10,000 virtual machines for test automation use on a daily basis, each running some set of tests. It takes just seconds to have a new machine up and running, and I often find myself tempted to use one of the test automation machines, as the wait times for the images reserved for manual testing are measured in minutes.

    With the thought of perspectives, I'll go do a little more research on how others do this. If you're working at scales like this, I would love to benchmark experiences.

    Monday, January 30, 2017

    What's worth repeating?

    This is again a tale of two testers, approaching the same problem in very different ways.

    There's this "simple" feature, having more layers than first meets the eye. It's simple because it is conceptually simple. There's a piece of software in one end that writes stuff to a file that gets sent to the other end and shown on a user interface. Yet, it's complicated looking at it from just having spent a day on it.
    • it is not obvious that the piece of software sending is the right version. And it wasn't, due to an updating bug. Insight: test for the latest version being available
    • it is not obvious that whatever needs to be written into the file gets written. Insight: test for all intended functionality being implemented
    • it is not obvious that when writing to the file, it gets all the way to the other side. Insight: test for reasons to drop content
    • it is not obvious that on the other side, the information is shown in the right place. Insight: test for mapping what is sent to where it is received and shown
    • it is not obvious that what gets sent gets to the other side in the same format. Insight: test for conversions, e.g. character sets and number precision
    • it is not obvious that if info is right on one case, it isn't hardcoded for that 1st case. Insight: test for values changing (or talk to the dev or read the code)
    It took me a day to figure this out (and get the issues fixed) without implementing any test automation. For automation, this would be a mix of local file verification (catching the sent file on a mock server, because manually I can turn off the network to keep my files, while our automation needs the connection and thus a workaround), a bunch of web APIs, and a web GUI.


    So I look at my list of insights and think: which of these would even be worth repeating? And which of these require the "system" for repeating them, and which could just as well be cared for from the "unit" perspective? A rather straightforward mapping architecture, yet many components in the scenario. Unlikely to change much, but likely to be extended to some degree. What automation would be useful, then, if we did not get use of it while creating the feature in the first place?
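
    As an illustration of caring for one of these insights at the "unit" perspective - the conversion function and its names are invented for the sketch:

```python
import unittest

def to_wire_format(value):
    """Stand-in for the code that writes values into the file being sent."""
    return f"{value:.2f}" if isinstance(value, float) else str(value)

class ConversionTests(unittest.TestCase):
    # Insight: test for conversions, e.g. character sets and number precision.
    def test_number_precision_survives(self):
        self.assertEqual(to_wire_format(3.14159), "3.14")

    def test_non_ascii_characters_survive(self):
        self.assertEqual(to_wire_format("höyry"), "höyry")
```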

    And again, I think there is an overemphasis on system-level test automation in the air. Many of these I would recognize from the code if they broke again - pretty much all but the first. We test too much and review / discuss / collaborate too little.
     
    Can I just say it: I wish we were mob programming.





    Tuesday, January 24, 2017

    Frustrations on system test automation efforts

    For a tester who would rather spend her time not automating (just because there's so much more!), I spend a lot of time thinking about test automation. So let's be clear: people who choose not to spend their days in the details of the code might still have relevant understanding of how the details of the code could become better. And in testing, I'm a domain expert: I can tell the scopes in which assumptions can be tested, to a level where I would almost believe them.

    Back in my earlier days, I was really frustrated with companies turning great testers into bad test automation developers (that happens, a lot!), and these days, I'm really frustrated with companies turning great testers away and hiring test automation developers instead. Closing one's eyes to the multitude of feedback you might want while developing makes automation easier - yet not quite the testing one may be imagining. One thing has changed from my earlier days: I no longer think of becoming a bad test automation developer as the end for those people, as long as they start treating themselves like programmers and growing in that field. It's more of a career change, benefiting from the old domain knowledge. I still might question, based on my samples, the depth of testing domain knowledge in many of the people I've seen make that transition. Becoming a really good exploratory tester is a long road, and often people make the switch rather sooner than later.

    Recently, I've been frustrated with test automation specialists with a testing background who automate from the system / user perspective and refuse to consider that while this is a relevant viewpoint, a less brittle one might involve addressing things from a smaller, more technology-oriented perspective. That unit tests are actually full-fledged tests, an option for keeping track of things that should work. That it is ok to test a connected system with a fake connection. And that automation, when it is on the plate, just doesn't need to be a simulation of what a real user would do. Granularity - knowing just what broke - is more relevant.
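
    A minimal sketch of that idea - a fake connection standing in for the real one, with invented names:

```python
class FakeConnection:
    """Stands in for the real network connection in small-scope tests."""
    def __init__(self, canned_reply):
        self.canned_reply = canned_reply
        self.sent = []

    def send(self, message):
        self.sent.append(message)
        return self.canned_reply

def report_status(connection):
    """The unit under test: talks through whatever connection it is given."""
    reply = connection.send("STATUS")
    return reply == "OK"

def test_status_reporting_accepts_ok():
    connection = FakeConnection(canned_reply="OK")
    assert report_status(connection)
    assert connection.sent == ["STATUS"]  # knows just what was sent
```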

    I believe we run our test automation because things change, and as a long-time tester, I care deeply about what changed. I recognize the changes that my lovely developers make, and I have brilliant ways of being alerted, both by them through a lot of beautiful contextualized discussions, and by just seeing from the tools what they committed. I read commit comments, I read the names of changed files and their locations, and I read code. I recognize changes coming into our environment from 3rd party components we are in control of, and I recognize changes in the environments that we can't really control in any way.

    And while our system test automation works against all sources of change, I prefer to think of my lovely developers, over my users, as the audience the test automation gives feedback to. The feedback should be, for the developers, timely and individualized to the change they just introduced. A lot of the time I see system test automation where any manual tester does timely and individualized better than the system created for this purpose.

    Things fail for a reason. Build your tests granular to isolate those reasons. 

    Saturday, January 14, 2017

    Thinking in Scopes

    The system I'm testing these days is very much a multi-team effort, and as an exploratory tester looking particularly into how well our Windows client works, I find myself often in between all of these teams. I don't really care if my component works as designed, if the other components are out of sync, failing to provide end users the value that was expected.

    Working in this, I've started to see that my stance is a rare one. It would appear that most people look very much at the components they are creating, the features they assign to those components, and the dependencies upstream or downstream that they recognize. But exploring is all about discovering things I don't necessarily recognize, so confirmation and feature focus won't really work for me.

    To cope with a big multi-team system, I place my main focus on the two end points that users see. There is a web GUI for management purposes, and there's a local Windows client. And a lot of things in between, depending on what functionality I have in mind. As an exploratory tester, while I care most about the end-to-end experience, I also care about the ways I can make things fail faster with all the components along the way, and I have control over all the pieces in between.

    I find that this decomposition of things into pieces, while caring for the whole chain, may not be as common as I'd like amongst my peers. In particular, amongst my peers who have chosen to pay attention to test automation from a manual system tester background.

    Like me, they care about end to end, but whatever they do, they want to do by means of automation. They build hugely complicated scripts to do very basic things on the client, and are inclined to build hugely complicated scripts to do very basic things on the web UI - a true end-to-end, automated.

    There's this almost funny thing about automation: while I'm happy to find problems exploring and then pinpoint them to the right piece, I feel the automation fails if it can't do a better job at pinpointing where the problem is in the first place. It's not just a replacement for what could be done manually while testing; it's also a replacement for the work to be done after it fails. Granularity matters.

    For automation purposes, decomposing the system into smaller chains responsible for particular functionality gets more important. 

    I drew a picture of my puzzle.


    Number 6 is true end-to-end: doing something on the Windows client 'like a user', and verifying things on the web GUI 'like a user'. Right now I'm thinking we should have no automated tests in this scope.

    Number 1 is almost end to end, because the web GUI is very thin. Doing something on the Windows client and verifying on the same REST services that serve the GUI. This is my team's favored system automation perspective, to the extent that I'm still struggling to introduce any other scopes. When these fail (and that is often), we talk about figuring things out in the scope of about 10 teams.

    Number 2 is the backend system ownership team's favored testing scope. Simulating the Windows client by pushing simulated messages in through one REST API and seeing them come out transformed from another REST API. It gives a wide variety of control through simulating all the weird things the client might be sending.

    Number 5 is something the backend system ownership team has had in the past. It takes a REST API as the point of entry, simulating the Windows client, but verifies the end user perspective with the web GUI. We're actively lowering the number of these tests, as experimenting with them shows they tend to find the same problems as REST-to-REST, but are significantly slower and more brittle.

    I'm trying hard right now to introduce scopes 3 and 4. Scope 3 would include tests that verify whatever the Windows client generates against whatever the backend system ownership team expects per their simulated data. Scope 4 would be system testing on just the Windows side.

    The scopes were always there. They are relevant when exploring. They are just as relevant (if not more relevant) when automating. 

    The preference for the whole-system scope puzzles me. I think it is learned in the years as a "manual system tester", later turned "system test automation specialist". Decomposing requires a deeper understanding of what gets built and how. But it creates a lot better automation.

    Telling me there are unit tests, integration tests and system tests just isn't helpful. We need the scopes. Thinking in scopes is important. 




    Saturday, January 7, 2017

    Why setting out to automate tests is a bad idea

    On Thursday at work, a colleague was giving a presentation I had invited him to give, on how they've been automating their tests. Organizing sharing sessions comes naturally to me, both from being curious and knowing where to find all the best stories, and from wanting to create an atmosphere of sharing and learning.

    As his story is starting, he tells us he needs to explain a few things first. He spends maybe 30 seconds on explaining why finding a way to automate was so needed (malware evolves fast and when you're responding to something like that, you will need to evolve fast too). But then, he spends 20 minutes talking about things most people in the room, identifying as quality engineers, have never done. He speaks of recognizing problems with being able to test, and finding the best possible programmatic solution.

    He talked about how they introduced blue-red deployments within the product (without even knowing it was a thing outside Windows client software) and how that solved all sorts of problems with files being locked. He shared how they changed, bit by bit, the technical designs so that the whole installation is rebootless, because it was just hard to automate stuff that would need to continue after a reboot. Example by example, his story emerges: to automate testing, they needed to fix testability. And that just adding tests, when you have big problems that are hard to work around and you can change the product, makes little sense.

    The story makes it clear: to be effective in this style of testing, you should be able to program outside of the tests you're programming, and if you can't, team up with someone who can. Without the view of solving problems programmatically where they make the most sense (design vs. tests), you would be on a path to difficulties.

    For a room full of test automators who barely look into the application code, his message may have been intimidating. Setting out to automate tests (as in: this is what I want to test, the designs don't change) is often an invitation to trouble.

    Make it first simple to test, then make a simple test to test it. The first is much harder. And I find that repurposed manual testers who become test automators without caring for product structures that make "manual" testing easier hit this trap harder than exploratory testers who have been working with their friends with pickup trucks (programmers) all along.