A significant part of my work is to explore how testing is done where I work, at scale, to figure out what I should help with, what I should focus on, and what might even be going on. It is not an easy task.
Talking with people at scale is difficult. I can do questionnaires at scale, but there are only so many meaningful conversations I can have in a day of work. In the last year, though, I have started to be able to give shape to those conversations with numbers. I know I connect with about 60 people monthly: 30 are constant over the project / team I focus on, and the other 30 rotate, one group of 30 one month, another group of 30 the next.
From the conversations I have had, I have found out where people keep their artifacts, and sampled them. I have had conversations comparing their artifacts to the things they tell me, and come to the conclusion that actively pulling help is a hard thing to do, because the help people would know to pull is limited to what they know they know.
In addition to sampling their artifacts, I have counted them. Last Friday I showed a group of peers numbers from counting changes (pull requests) to a particular artifact (test automation system code) for a conversation, and got a bit of a conversation I did not expect.
Personally, I look at the quantitative side of pull requests as an invitation to explore. A small number, a number different from what I would expect, and a large number all require me to ask what happens behind that number. I am well aware that pull requests to test automation represent only a part of the work we do, and that I could inflate the number artificially by splitting the changes. But what a number tells me is that nothing will change if we don't change anything. The number of test automation pull requests in relation to pull requests to the application that automation tests tells me a little about how we work on things that go together (an app and its tests), and the number of people contributing to the code bases tells me a little about how tightly specialized maintaining test code bases is. There's no number I expect and target; it is a description of what things turned out to be.
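The two descriptive numbers above can be sketched as a small calculation. This is a minimal illustration, not my actual tooling; the counts, names, and functions are hypothetical, and in practice the raw data would come from version control or the hosting platform.

```python
# Sketch of two descriptive numbers about pull request activity.
# All data here is hypothetical, for illustration only.

def pr_ratio(test_prs: int, app_prs: int) -> float:
    """Test automation PRs relative to application PRs over the same period."""
    return test_prs / app_prs

def contributor_overlap(test_authors: set, app_authors: set) -> float:
    """Share of test-code contributors who also contribute to the app.
    A low overlap hints that maintaining the test code base is
    tightly specialized to a few people."""
    return len(test_authors & app_authors) / len(test_authors)

# Hypothetical quarter: 40 test automation PRs against 200 application PRs.
ratio = pr_ratio(40, 200)
# Hypothetical author sets: two of three test-code authors also touch the app.
overlap = contributor_overlap({"ana", "ben", "cho"},
                              {"ben", "cho", "dia", "eli"})
print(ratio, overlap)
```

Neither number is a target; each is a probe that invites a follow-up question about what happens behind it.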
When I ask for a number, or go get a number, I find the idea of "you must tell me exactly what question you are trying to answer" peculiar for someone who is exploring. My questions are not absolutes, but probes. Exploring, like with failing test automation, calls me to dig deeper. A number is not the end result; it is a step on my way.
Distrust in numbers runs deep. And while I decided to be ok trusting managers with numbers, I have been learning that the step that was difficult for me is even more difficult for others. So it's time to make it less special, and normalize the fact that numbers exist. Interpretations exist. And conversations exist, even ones I would rather not have because they derail me from what I would like to see happen.