I'm a tester by trade and at heart, meaning that when looking at a piece of code, I get my thrills from thinking about how it will fail rather than how I can get it to work. Pairing with a developer who doesn't understand the difference can be an uncomfortable experience. Seeking weaknesses in something that exists is a rather different exercise from building something up.
Imagine an interview situation, going in with "Set up your IDE in a language of your choice, ready to go". A tester pairing with a developer on a "programming skills test". That setup alone makes me uncomfortable.
With an exercise out of the blue, the usual happens: the expectations about what we're about to do get muddled. They share a gist of numbers written out in English as text. Working on the code, they start by asking for a "signature" rather than explaining the problem. The usual troubles of pairing with someone new.
With an empty canvas, we write the first test, selecting something from that list in the gist.
Following red, the test won't pass without an implementation. So you add the implementation.
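The original test isn't shown in the post; a minimal sketch of that first red/green step, assuming Python and pytest, with the function name `number_to_words` and the example value my own:

```python
# A sketch of the first red/green step, assuming Python and pytest.
# The function name number_to_words and the example value are my own,
# not necessarily what we wrote in the session.
def number_to_words(number: int) -> str:
    # Simplest thing that makes the first test pass: a hard-coded answer.
    return "seven"


def test_first_number():
    assert number_to_words(7) == "seven"
```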
Expectations are about to get more muddled. While I know where this is going with TDD, this is not the thing I signed up for.
I know that I can build up the implementation from here, and that doing so shows how I would solve this puzzle. But my work starts from seeing one of the many ways the puzzle gets solved, and I try to steer towards that (with little success, of course).
I ask to just state the problem in English. Knowing the IDE runs Copilot, I am hoping to steer the conversation from designing code with TDD to testing code someone else designed: critiquing the design that emerged, figuring out which values reveal that we missed something, seeking the limits of the solution.
The English comes off as

```
# function to convert integer to text
```
It's not particularly good English for the problem, and if I had known what my pair held in their head, I could have improved it.
Copilot is super helpful, offering multiple solutions to choose from with Ctrl+Enter. Now we're talking testing!
The first option is next to hilarious.
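The actual suggestion isn't reproduced here; illustrative of the kind of option that fails at a glance would be something like a lookup table that covers almost nothing:

```python
# Illustrative only, not the actual Copilot suggestion: a lookup table
# that silently covers a handful of values and crashes on everything else.
WORDS = {0: "zero", 1: "one", 2: "two", 3: "three"}


def number_to_words(number: int) -> str:
    return WORDS[number]  # KeyError for anything above three
```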
You don't need to test beyond a glance of a review to see that this is not going to work out. Browsing further, you see more promising options.
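A representative sketch of such a promising option, again not the code Copilot actually produced in the session, might handle zero to 999 like this:

```python
# A representative "promising option" for 0-999, not the actual
# suggestion from the session.
ONES = ("zero one two three four five six seven eight nine ten eleven "
        "twelve thirteen fourteen fifteen sixteen seventeen eighteen "
        "nineteen").split()
TENS = "_ _ twenty thirty forty fifty sixty seventy eighty ninety".split()


def number_to_words(number: int) -> str:
    if number < 20:
        return ONES[number]
    if number < 100:
        tens, ones = divmod(number, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    hundreds, rest = divmod(number, 100)
    return ONES[hundreds] + " hundred" + (
        " " + number_to_words(rest) if rest else "")
```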
Now I have something to test!
I would already have lovingly crafted tests for each of my branches if I had stuck to the process of TDDing the solution out, but that is not what I personally find interesting. I'm a tester, and the test target is my external imagination. And I wouldn't hand-craft the examples; there are much more effective strategies for this.
I turn this into an approval test of all values from zero to 999 - and beyond if I feel like it.
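The actual test isn't shown in the post; a sketch of the harness, with a compact stand-in for the implementation under test inlined so the example is self-contained. With the approvaltests package, the report string would go to `verify(report)` and be diffed against the approved snapshot on every run:

```python
# Sketch of the approval-test harness: one line per value, the whole
# report snapshot-verified. A compact stand-in implementation is
# inlined here; the real session would test whatever Copilot produced.
ONES = ("zero one two three four five six seven eight nine ten eleven "
        "twelve thirteen fourteen fifteen sixteen seventeen eighteen "
        "nineteen").split()
TENS = "_ _ twenty thirty forty fifty sixty seventy eighty ninety".split()


def number_to_words(n: int) -> str:
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    hundreds, rest = divmod(n, 100)
    return ONES[hundreds] + " hundred" + (
        " " + number_to_words(rest) if rest else "")


def build_report(limit: int = 1000) -> str:
    # With approvaltests this whole string would go to verify(report),
    # compared against the approved file, with a diff on any change.
    return "\n".join(f"{n} => {number_to_words(n)}" for n in range(limit))
```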
Now I can visually verify if I like the pattern I see.
The `...` added to elide a very long list, the output beautifully doing what I was expecting.
I have worked with programmers for 25 years, knowing that the solution could be different. It could be any of the other suggestions Copilot gives me, and I, an exploratory tester extraordinaire, care about results matching what is necessary for the success of the business, the user, and the other stakeholders.
Another solution I could have ended up with is this one:
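The original snippet isn't reproduced here; an illustrative stand-in for a concise alternative, deliberately differing in style (spaces instead of hyphens, an added "and"), might be:

```python
# Illustrative stand-in for the "nice and concise" alternative, not the
# original code: same idea squeezed down, but with a different output
# style - spaces instead of hyphens, and an added "and".
def number_to_words(n: int) -> str:
    ones = ("zero one two three four five six seven eight nine ten eleven "
            "twelve thirteen fourteen fifteen sixteen seventeen eighteen "
            "nineteen").split()
    tens = "_ _ twenty thirty forty fifty sixty seventy eighty ninety".split()
    if n < 20:
        return ones[n]
    if n < 100:
        return (tens[n // 10] + " " + ones[n % 10]).removesuffix(" zero")
    return ones[n // 100] + " hundred" + (
        " and " + number_to_words(n % 100) if n % 100 else "")
```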
Which is nice and concise.
Comparing its output on the same test to the previous implementation's, the difference is glaring:
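The actual reports aren't in the post; as an illustration, with hypothetical sample lines, this is the kind of style difference (hyphenation, use of "and") a diff of two approval reports would surface:

```python
import difflib

# Hypothetical sample lines, not the actual outputs: they illustrate the
# kind of style differences - hyphenation and "and" - that diffing the
# approval reports of two implementations would surface.
report_a = ["21 => twenty-one", "101 => one hundred one"]
report_b = ["21 => twenty one", "101 => one hundred and one"]

diff = list(difflib.unified_diff(report_a, report_b, lineterm=""))
for line in diff:
    print(line)
```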
I could also have ended up with this:
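Again, an illustrative stand-in rather than the original: a version that reads plausibly but gets the problem wrong, the sort of thing an approval run over all the values exposes immediately:

```python
# Illustrative stand-in for a third suggestion, not the original code:
# spelling the number digit by digit. It reads plausibly and runs, but
# everything from ten upward comes out wrong ("21" becomes "two one").
DIGITS = "zero one two three four five six seven eight nine".split()


def number_to_words(n: int) -> str:
    return " ".join(DIGITS[int(d)] for d in str(n))
```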
And with the very same approach to testing, I would have learned how this one fails.
And trust me, at worst this is the functionality I could expect to get. And with all this explaining, we did not get to talk about choices of algorithms, whether performance matters, whether this can or even needs to be extended, or where (and why) anyone would care to implement such a thing for real use.
With Copilot, I wouldn't even have had to read the given values from the file I was handed in the first place. I did that after the interview, because I was sure it wasn't complicated, and some of that work feels like adding new failure modes around file handling that I would deal with when they exist.
Instead of us having fun testing together, we had a different kind of fun. I still often fail to convince developers, especially in interview situations where they are fitting me into their box, that what I do for work is different. And I can do it at many scales and with many programming languages, because my work does not center on the language. It centers on the information.
Conclusion to this interview experience: a nice story for a blog post, but not the life I want to live at work.