I started off by (re)testing a small feature that, just before my vacation, I had noted didn't work in many ways. Instead of us finishing the feature together and sharing a sense of success in making a delivery, I was told it would be fixed while I was gone. Well, some of it was fixed, but overall it still doesn't work. The users could not do valuable work with it. It doesn't work in real scenarios.
I talk with the developer, only to hear that the things I had reported before are "special cases" - relevant to the user, but all requiring logic he hasn't implemented yet because he assumed they would be special. I mention the new problems, and they are deemed "weird behaviors" - on that we agree. I'm concerned about the severity of the issues for the end user, blocking the delivery of added value. The developer seems concerned with the technical challenge of it: how could the code do things like that? I just know it does, from experience.
I ask the developer if he tested the feature himself - if he actually uses it - and he tells me he does. The stark difference in our experiences calls for pairing to test it, to share the experience. With new energy, I'm about to suggest just that.
But just before I suggest pairing (which, sadly, isn't a normal way for us to work), the developer points out that he has started on another task that will take him days to weeks, and that he will not look into the "weird behaviors" before he's done with that work. I remember again what really frustrates me: unfinished features waiting in inventory. Features that don't improve while they wait, features that get more broken when they are "fixed" after weeks of wait time. And the repeated experience of testing something that doesn't work for me, but presumably always works for the developer.
Waiting sucks. Finding easy bugs sucks. Not collaborating sucks. This isn't testing; this is starting to test, again and again, only to be blocked right at the beginning. Testing would be great. I hope I get to do more of that soon.