Sunday, September 19, 2021

There are Plenty of Ways to Talk about Exploratory Testing

Out of all things Ministry of Testing, I am part of a very small corner. I speak at the independent meetups even when they fly under that flag, I speak at their sessions if they invite me (which they don't), and I am a silent lurker on a single channel, exploratory-testing, on their slack. Lurking provided me a piece of inspiration today: an article from Jamaal Todd.

Jamaal asked LinkedIn and Reddit about exploratory testing. Of the roughly 400 people who took the time to click on the polls, fewer than 10% said they don't do exploratory testing, 50-70% gave it a resounding yes, and a fair portion found it at least sometimes worth doing. 

What Jamaal's article taught me is that a lot of people recognize exploratory testing as something of value, and that surprised me a little. After all, as we can see from the responses by one of the terrible twins of testing in the slack, they do a lot of communication around the idea that exploratory testing does not exist.

It exists, it is doing well, and we have plenty of ways to talk about it, which can be really confusing. 

When I talk of exploratory testing, I frame mine as Contemporary Exploratory Testing. The main difference in how I talk about it is that it includes test automation. Your automated tests call you to explore when they fail, and they are a platform from which you can add power to your exploration. Some of them even do part of the exploration for you.
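To make the "platform for exploration" idea concrete, here is a minimal sketch (the function, the endpoint, and the data are all hypothetical stand-ins, not from any real project): a check that, when it fails, hands the explorer the raw evidence to start exploring from, instead of only a red cross.

```python
import json

def get_profile(user_id):
    # Hypothetical stand-in for a real API call; swap in your own client.
    return {"id": user_id, "name": "Ada", "roles": []}

def check_profile_roles(user_id):
    """Run the automated check and, on failure, collect evidence.

    Returns (passed, evidence): evidence is the raw response as JSON
    text when the check fails, so exploration can start from facts.
    """
    response = get_profile(user_id)
    passed = bool(response.get("roles"))
    evidence = None if passed else json.dumps(response, indent=2)
    return passed, evidence
```

A failing check like this calls you to explore: the captured response becomes the first note of an exploratory testing session, not the end of the story.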

Not everyone thinks of exploratory testing this way. The testing communities tried labeling different ideas a few decades ago with "schools of testing", and we are still hurting from that effort. When the person assigning labels to others does so to promote their own true way, the other labels come off as dismissive. "Context-driven" sounds active, intentional. But "Factory" is offensive. 

One of the many things I have learned in programming is that naming is one of the hard problems. And a lot of the time we try to nail the name early on, when we could just as well rename things as we learn their true essence. Instead of stopping to think about the best name, start with some name, and refactor. 

So, we have plenty of ways to talk about exploratory testing. What are they? 

  1. Contemporary Exploratory Testing. It includes automation. Programming is non-negotiable, but programming is created by smart people. The reason we don't automate (yet) is that we are ramping up skills. 
  2. Exploratory Testing 3.0. It does not exist; there is only testing. The non-exploratory part we want to call 'checking', mostly to manage the concern that people don't think well while they focus on automation. Also known as RST, Rapid Software Testing. It is all exploratory testing. 
  3. Technique Exploratory Testing. We think of all the things we have names for, and yet there is something in testing we need to do to find the bugs everything else misses. That we call exploratory testing. Managing this technique through sessions of freedom is a convenient way to package it. 
  4. Manual Exploratory Testing. It's what testers who don't code do. An essential part of defining it is that automation is not it, usually motivated by testers whose hands are already full of valuable work even without automation. 
  5. Session-Based Exploratory Testing. Without management through sessions, exploratory testing isn't disciplined and structured. Focus on frequently planning what we use time on, and ensure there is enough documentation to satisfy the organization's documentation needs we aren't allowed to renegotiate. 

Let's start with these. Every piece of writing there is on exploratory testing falls into one of these beliefs. The thing is, all that writing is useful and worth reading. It's not about one of these being better, but about you being able to make sense of the idea that NOTHING is one thing when people are involved. 

I invite you all to join the conversation on all kinds of exploratory testing at the Exploratory Testing Slack. The link is available on the main page of the Exploratory Testing Academy.


 

Friday, September 10, 2021

The Power of Three Plans

This week, I have taken inspiration from discussions at FroGSConf last week, and worked on my test plans. And today, I am ready to share that instead of creating one plan, I created three. I call this the power of three. Very often different formats serve different planning needs best, and for the things I wanted, one plan would not do. 

The 1A4 Master Test Plan

The first plan I created was a master test plan. Where I work, we have a fairly elaborate template from the times when projects were large and not done in an agile fashion. That plan format overemphasizes thinking of details prematurely, but has good ideas behind it, like understanding the relationships between the different kinds of testing we will or will not do. 

Analyzing it, I could condense the relevant parts of the plans into one A4, with many items specific to the fact that we are building embedded systems. 

While I don't think I wrote anything so insightful into the plan that I could not share the actual plan I created for my project, I opt for the safer side and won't. We did not tick some of these boxes, and with this one-glimpse plan we could see which ones we did not tick, and had a conversation about one of them we should be ticking even though we didn't plan to. 

You can see my plan format has five segments:
  • Quality Target describes the general ideas about the product that drive our idea of how we think around quality for this product. 
  • Software Unit & Integration Testing describes the developer-oriented quality practices we choose from generally. 
  • Hardware Testing addresses the significant overlap and dependency between hardware and system testing that we should have conversations about. 
  • System Testing looks at integrated features running on realistic hardware, becoming incrementally more complete in agile projects. 
  • Production Testing addresses the perspective that hardware units are individuals with particular failure modes, and that the assembly of a system brings customer-specific perspectives to have conversations on. 
For us, different people do these different tests but good testing is done through better relationships between the groups, and establishing trust across quality practices. The conversations leading up to a plan have taken me months, and the plan serves more as a document of what I facilitate than a guideline of how different people will end up dealing with their interdependent responsibilities. 

We could talk about how I came up with the boxes to tick: through workshops on the concepts people have in the company, and creating structure out of the many opinions. A visual workshop wins over writing a plan, but we can talk about those in another post later. 

The System Test Strategy

The second plan I created was inspired by the fact that we are struggling with focus. We have a lot of detail, and while I am seeing a structure within the detail, I did not like my earlier attempts at writing it down. On the course I teach for Exploratory Testing Academy, I have created a super straightforward way of doing a strategy by answering three questions, and I posted a sample strategy from the course on Twitter. 
I realized I had not written one like this for the project I work in, so I got to work and answered those questions. This particular one I even shared with my team, and while I only got comments from one person, their perception was that it shone light on important risks and reactions in testing. 

In hindsight, my motivation for writing this was twofold. I was thinking of what I would say about the ideas that guide our testing to the new tester starting in a week, and I was thinking of what would help me prune out the actions that aren't supporting us in the tight schedule we have ahead. 

This plan is actually useful to me in targeting the testing I do, and it might help with some in-team conversations. I accept that no document or written text ever makes it all clear, but it can serve as an anchor for some group learning. 

The Test Environments Plan

The third plan I produced is a plan of hardware and connections for the test environments. If there is one thing that does not move in a very agile fashion, it is getting hardware. And I am not worried about the hardware we build ourselves, but about the mere fact that we ordered 12 off-the-shelf mini-PCs in May, and we currently expect to receive them in December. There are many things in this space that, if you don't design them in advance, you won't have when you need them. The hardware and systems are particularly tricky in my team with embedded software: we each have our own environment, we have many shared environments, and some environments we want to ensure have little to no need of rebooting, to run particular kinds of tests on. 

So I planned the environments. I drew two conceptual schematics of end-to-end environments with the necessary connections, separated purposes across the different environments, and addressed the fact that these environments serve both a team of 16 people in my immediate vicinity and hundreds of us over the end-to-end scenario. 

It was not the first time I planned environments. The main things I did this week were ensuring we have what we need for the new hires and new features coming in Fall '21, and building better capabilities for helping the project discuss the cost and schedule implications of not having some of what we need. 

The Combination

So I have three plans: 
  • The 1A4 master test plan
  • The system test strategy
  • The test environment plan
For now I think this is what I can work with. And it is sufficient to combine them into just links to each of the three. Smaller chunks are more digestible, and the audiences differ. The first is for everyone testing for a subsystem. The second is for system testing in the subsystem, in integration with other subsystems. The third is for the subsystem to be reused by the other teams this subsystem integrates with. 

I don't expect to need to update any of these plans every agile iteration we do; the ideas will evolve, but they might stand the test of time for the next six months. We will see. 


Sunday, September 5, 2021

Test Plans and Templates

Imagine being assigned responsibility for helping all projects and products in your organization get started on a good foot in testing. There are enough of them that you just can't imagine being there for all of them to teach them the necessary skills. You've seen a good few get lost on testing, miss out on the availability of test environments and data, and get delayed. You want to help, to give some sort of a guideline.

The usual answer to this is creating a template of some sort, to guide people through important considerations by documenting their thinking. When they document their thinking, others have a chance of saying what is missing. 

If it sounds so attractive, why is it that these plans often fail? People don't fill in the template, finding little value in it. People fill in the template, but the testing done does not match the plan; people don't read the text. And you really can't create the skill of thoughtful testing with a template. 

Yesterday at #frogsconf (Friends of Good Software), one of the conversations was on the test plans and templates we create. As I saw others' examples and showed mine, I hated every single document that I had written. The documents are not the conversation that precedes good work. The documents create a shallow bond between the reader and the document, and true co-ownership would require ensemble writing of the document so that it's ours, not mine. And instead of the many words, we'd be better off filtering out the core words that will lead our efforts. 

My strategy for test automation nowadays distills into one sentence: if we don't change anything, nothing changes for better. 

The fewer words we write, the more likely we are to get people to read them, hear them, internalize them, and use them to guide their work. 

To create a plan, a better approach might be a visual whiteboard with as few sections to fill as possible. Allow people to find their own words and concepts to explain how they will do the work. 

I shared an example from the course I have been creating, an example I have seen direct students into testing the application better. The problem is, I needed to do the work of testing the entire application to be able to write that document, and that is not what we can expect in projects. 

I have a few plans I need to do next week, so now is my time to think about what those plans will look like. 

Tuesday, August 31, 2021

Stray Testers Unite!

I have been observing a phenomenon: there are stray testers out there. 

It is unfortunately common that testers find themselves wandering at large or being lost in what it looks like to do a good job at testing. 

For one tester, being stray manifests as them waiting for a project to truly start. There's architecture, there's requirements, there's conversations, there's strategies, there's agreements, there's team building and all that, but whenever there's testing, the early prototypes often feel better left for developers to test. And there is only so much one can support, lead and strategize. 

For another tester, being stray manifests as them following so many teams that they no longer have time for hands-on work. They didn't intend to get lost in coordination and management, but with others relying on them knowing and summarizing, it comes naturally. They make the moves but find no joy. 

For a third tester, being stray manifests as not knowing where to even start or where to head. With many things unknown to a new professional and little support available, trying to fulfill vague and conflicting expectations about use of time and results leaves them wandering around. 

In many ways, I find we testers are mostly strays these days. We consider how *we* speak to developers but don't see developers putting the same emphasis on the mutual relationship. We navigate increasingly complex team and system differences, figuring out the task of "find (some of) the bugs that we otherwise missed". We face expectations of little time used, many bugs found, and everything documented in automation, while creating nice lists of bug reports in a timely manner. The ratio of our kind is down, driven toward zero by making some of us assume new 'developer' identities. Finding our tribe is increasingly difficult, and requires looking outside the organization to feel less alone and different. 

Communities are important. Connections are important. Caring across company boundaries is important. But in addition to that, I call for the companies to do their share and create spaces where testers grow and thrive. We need better support for skills and career growth in this space. We need time with our peers, for them to help us and for us to learn together. We need the space to learn, and the expectation and support from our teams in doing that. 

Make sure you talk to your colleagues in the company. Make sure you talk to your colleagues in other companies. It will help us all. Stray testers need to unite. 

Tuesday, August 24, 2021

Social Media, a Practitioner Perspective

Someone at the company I work with invited me to share my experiences of being active on social media about work-adjacent topics, particularly what they framed as *thought leadership on LinkedIn*. In preparing to share, I did what I always do: I opened a page in my notebook and started scribbling a model of things I would like to share. And since I did that step and shared internally, I realized writing it down would be necessary, to notice a slow change of mind over time. 

Where I'm at With Social Media

My story starts with the end - where I am today. 
  • 4100 connections on LinkedIn
  • 7221 followers on Twitter
  • 754 739 blog views over 775 blog posts
  • 450 conference sessions
I didn't set out to do any of this. It all grew one thing at a time. And I don't consider it a significant time investment, it merely reflects doing many little things over and over again over many many years. 

Why It Matters, Though

I recount the story of how I got my latest job. A tweet I shared announcing I'm ready to move, lovely people reaching out in significant numbers to discuss opportunities, turning into creating a position together that I would accept. This was my second position found and built like this, with an even better experience than before: I met people both on the hands-on side and in management before making the mutual commitment to take quality and testing forward together. I would not have found the jobs I thoroughly enjoyed without the network. This one was both found with the network, and built with the network. 

If I didn't have my connections, this would not be possible. 

Traversing the Timeline

Drafting over an electronic whiteboard, I drew a timeline with some of the core events. 
  • 2009 I wrote my first blog post. 
  • 2010 I joined Twitter.
  • 2015 I realized LinkedIn was not just for people I knew and had met, but for all the professional connections I wanted to uphold. 
  • 2020 I started posting on LinkedIn. 
My history with social media is not that long. And while it may now be strategic, it did not start off that way. 

My whole presence on social media is a continuation of work with non-profits and communities that I started in 1993. At first, being one of the very few women in tech at Helsinki University of Technology turned me into a serial volunteer. Later this background made me volunteer in 2001-2011 for the Finnish Association for Software Testing. Finally, I founded Software Testing Finland ry in 2014. 

Non-profits and communities were important to me for the very same reason social media is now important to me. 
I am a social learner who fast-tracks learning and impacts with networks of people. I show up to share and learn, to find my people and contents that help me forward in understanding.  
I started speaking in conferences to get over paralyzing fear of stages. 
I soon learned that the best way to attract good learning conversations was to share what I was excited about, from stage. 
I started blogging to take notes of my learning. 
I later learned that traveling to stages is limiting, when your blog can be a source of those connections too. 

My Content Approach

If my thinking takes a page or more to explain, I write a blog. 
If I have a thing to make a note of that I can say in public and it can be summarized shortly, I tweet it.
If it is too long to be a single tweet and I want to refrain from tweet storming a series of tweets, I take it to LinkedIn.
If I am seriously thinking it is good content that should be relevant for years, I publish it in one of the magazines or high traffic blog sites known for good content. 
If it can't be public, I have private support groups both inside the company and outside to discuss it. 

I don't have a schedule. I make no promises on where I post and when. It emerges as I follow my energy and my need to structure my thoughts. 

Making Time

My most controversial advice is probably around how to make time. 

I have two main principles on how I make time:
  1. No lists. When the urge to write something down on a list hits me, I can just write it to a proper channel. Time spent on lists is time away from publishing the finished piece.
  2. Write only. I use all of the public channels as write-only media. I very often mute the conversations and, if I have time and energy, go back to see what is going on. Sometimes I mute on the first annoying comment. And I have taken coaching to find peace in not arguing and explaining, but in expecting people who show up in my mentions for a conversation to approach with curiosity and with acceptance that I am not always available. 
I read things, but on my terms and schedule. I read what others write for a limited amount of time and what I see is based on luck and some of the algorithms making choices for me. I read comments and make active choices of when to engage. 

Social media is not my work. I have a testing job with major improvement responsibilities to run. Social media is a side thing. 

Deciding on What You Can Say

Finally, we talk about what we can say. I have carefully read and thought about the social media guidelines my employers have, and I seek to understand their intent. My thinking on what to say is framed around professional integrity and caring for people and purpose. Professional integrity means for me that I can discuss struggles and challenges, as long as I feel part of the solutions we are working on. Caring for people means recognizing that many of my colleagues read what I write and recognize themselves even in writing I did not think was about them, but about general challenges many people face. Caring for purpose means thinking about how to do no harm while maintaining integrity. 

We all choose a different profile we are comfortable projecting. What you see me write may appear unfiltered, but I guarantee it is not. 

The impacts of sharing openly are varied. Sometimes knowing what thoughts I am working through is an invitation for people to join forces and see perspectives. Most often people trust my true enthusiasm for solving each and every puzzle, including ones involving compromises. Sometimes I have offended people. I've appreciated the one who opened a conversation on the feelings my reflections raised when some of my wishes seemed impossible at the time.

I also remember well how one of my blog posts caused a bit of a discussion in my previous place of work. I still maintain my stance: it was a blog post about how test automation very often fails to reach its value when done on the side in projects, a problem I have lived through multiple times. But it was written at a time when, for someone, that sting of failure on top of their technical success was too much to handle. 

I apologize when I hurt people. I don't apologize for their feelings being hurt, but I work to understand what it was that I did. An apology comes with a change I will try to make. 

Final Words

When we were discussing me joining my current organization, my recruiting manager called an old manager of mine from over 10 years ago. The endorsement came with a realistic warning: Maaret is active on social media, and you want to be aware of that. 

I was hired regardless, and love my work. It is always ups and downs, and being visible/vocal holds power I try to use with respect. 

The platform is part of building a career instead of holding a job. Be who you want to be on social media so that it supports your career. My version is just my version, and yours should look like you. 

Wednesday, August 18, 2021

Future of Testing and Last Five Years?

This morning opened with Aleksis Tulonen reminding a group of people that five years ago he had asked us a question about the future of testing in five years. 

I had no idea what I might have said five years ago, but compared to what I am willing to say today, the likelihood that I said something safe is high. 

So, what changed in five years? 

  • Modern Agile. I had not done things like no product owner, no estimates, and no projects at the last checkpoint. I have done them now. And I have worked in organizations with scale; these rebel ideas have operated at the scale of the team I work in, with relevant business around us. 
  • Frequent Releases. It may be that I learned to make that happen, but it is happening. I've gone through three organizations and five teams, and moved us to better testing through more frequently baselining quality in production with a release. And these are not all web applications with a server, but globally distributed personal computers and IoT devices are in my mix.  
  • Integrated engineering teams without a separate testing group. Testers are in teams with developers. They are still different but get along, co-exist. Separate testing groups exist in fewer places than before. Developers at least help with testing and care about unit testing. You can expect unit testing. 
  • Exploratory includes automation. Profile of great testers changed into a combo of figuring out what to test and creating code that helps test that. The practice of "you can't automate without exploring. You can't explore (well) without automating." became day to day practice in my projects. 
  • BDD talk. BDD became a common storyline and I managed to avoid all practical good uses. I tried different parts of it but didn't get it to stick. But we stopped using the other words as commonly - specification by example and acceptance test driven development lost the battles.  
  • Ensemble Testing and Programming. It moved from something done at Hunter to something done in many but still rare places. I made it core to my teaching and facilitating exploratory testing in scale at organizations I work at. And it got renamed after all the arguments on how awful 'mobbing' sounds. The term isn't yet won over completely but it has traction. 
  • Testing Skill Atrophy. Finding people 'like me' is next to impossible. Senior people don't want to do testing, only coach and lead testing or create automation. Senior testers have become product owners or developers or quality coaches but rarely stay in hands-on testing. We are more siloed within testing than we were before. And finding "a tester" can mean so many things that recruiting is much harder these days. 
  • Developers as Exploratory Testers. Developers started testing, and in addition to small increments and test-after cycles taking us to a good level of unit testing without TDD, developers started driving and contributing in exploratory testing on different scopes of the system. They were given permission to do 'overlapping' work, and they ran further than testers did in the same timeframe. 
  • Test Automation University. Test automation became the go-to thing for new testers to learn around. Test Automation University, headmastered by Angie Jones and sponsored by Applitools, became a bazaar of materials on many different tools. 
  • Open-Source to Fame. Everyone has their own tool or framework or library. Everyone thinks they are better than others. Very few really know the competition, and marketing and community building is more likely to lead to fame. Starting something became more important than contributing to something. 
  • Browser Driver Polite Wars. Alternatives to Selenium emerged. Selenium became a standard, and even more alternatives emerged. People did a lot of browser testing, Cypress made it onto the JS-world radar for real, and Playwright started but is still in the early buzz. Despite options that are impossible to grasp for many people in development efforts (there's other stuff to focus on too!), people mostly remained respectful. 
  • Dabbles of ML. The first dabbles into machine learning in the testing space emerged. This space is dominated by commercial offerings, not open source. And programming was "automated" with GitHub Copilot, which translates well-formulated intentions written as comments into code machine-learned from someone else. Applications with machine learning became fairly commonly available, and bug fixing for those systems became different. 
  • Diversified information. There are more sources of information than ever before, but it is also harder to find. Dev.to and self-hosted blogs are the new go-to, and in addition to written content, video and audio content have become widely available. The difficult part is figuring out what content makes sense to give time to, and we've seen the rise of newsletter aggregators in the testing field. 
  • One Community is No More. Some communities have become commercial and in many ways now resemble tool vendors. Come to our site, buy our thing: paywalls are a thing of the day. At the same time, new sources have emerged. There is no "testing community"; there are tens of testing communities. Developer communities have become open to testing, and choosing to hang out with people in the language you work in has become something testers opt in for more often.
  • Twitter Blocks Afoot! While visible communication is more civil and less argumentative than before, people block people with a light touch. If blocking someone five years ago was an insult, it is now considered less of an insult and more a statement of curating the things you end up reacting to in the world.
  • Women in Testing. The unofficial Slack community grew and became a globally connected group of professionals. A safe space with people like me enabled me to start paying attention to content from men again, and saved many people from feeling alone and isolated in challenging situations. The community shows up at conferences as group photos.
  • DevOps. It is everywhere. The tools of it. The culture of it. The idea that we pay attention to 'testing' in production (synthetic tests and telemetry). 'Agile' became the facilitators' wasteland, and developers of different specialties grouped their agile here instead. 
  • Cloud. Products went cloud first. Supporting tools followed suit. And the world became cloud-native in many corners of the software development world.
  • Mobile & API. These became the norm. REST APIs (or gRPC) in the IDE are the new UI testing. Mobile has separate language implementations for presentation layers and forced us to split time between Web and Mobile. 
  • Crowdsourcing. It remains, but it did not commoditize testing much. I find that almost surprising, and a hope for a better future where testing is not paying users to hang out with our applications for peanuts, with bigger peanuts if they are lucky enough to see a bug. 
I most likely forgot a trend I should have named, but the list of reflections is already long. But back to what I predicted. 
I don’t think much will change yet in 1, 3 or 5 years, other than that our approaches continue to diversify: some companies will embrace the devops-style development and continuous testing, while others figure ways for releasing in small batches that get tested, and others do larger and larger batches. Esp. in the bespoke software sector, the forces of how to make money are so much in favor of waterfall that it takes decades for that to change.

But if there is a trend I’d like to see in testing, it’s towards the assumption of skill in testing. Kind of like this:

- black-hat hackers are people who learn to exploit software for malice.

- white-hat hackers are double agents who exploit software for malice and then give their information to the companies for good.

- exploratory testers are white-hat hackers who exploit software for any reason and give that information to companies. From compromise to more dimensions like “hard to use”, “doesn’t work as intended”, “annoys users”.

- because exploratory testers are a more generalized version of hackers, exploratory testing will go away at the same time as software hackers go away. You get AIs that write programs, but will need exploratory testers to figure out if the programs that are produced are what you want.

I don't think I see my hope of "assumption of skill in testing" realized. I see better programmers who can test, and a few good testers. Being a developer with a testing specialty is one of the entry-level roles, and everyone is expected to pick up some programming. Acceptance testing by non-programmers remains; it is lower paid and has a different time-use profile: as much time, but in small deliveries and with better quality to begin with. 

My bet that AI will come for the programmer jobs barely fits the 5-year window, but from where we are now, I wouldn't say I was wrong about it. Then again, testers became programmers, and we started understanding that programmers don't write code 7.5 hours a day. 

Next five years? We will see how that goes. 



Conflicted in Bug Reporting

In 1999 I was working on a research paper on bug reporting systems. I was motivated by the fact that the very first project of my career (at Microsoft) did bug reporting tooling so well compared to anything and everything I have seen since, and their tool was not made available to the public. It still isn't. 

With a lot of digging into published material, I found one thesis that included public descriptions of what they had, but it was limited. So I read through a vast amount of literature on the topic otherwise, and learned that bug classification was academia's pet. 

I never finished the research paper, but everything I learned in the process has paved my way to figuring things out in projects ever since. And with another 22 years of really looking at it and thinking about it, I have come to disagree with my past self. 

The current target state for bug reporting that I aspire to lead teams into is based on a simple principle:
Fix and Forget. 
I want us to be in a state where bug reporting (and backlogs in general) is no longer necessary. I want us to have so few bugs that we don't need to track them. When we see a bug, we fix the bug and improve our test automation and other ways of building the product across its various layers, enough to believe that if the problem re-emerged, we'd catch it. 
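As a sketch of what fix and forget can look like at the code level (the rounding bug, the numbers, and the function name are all invented for illustration): instead of filing and classifying a report, the fix lands together with an automated test that would catch the problem if it ever re-emerged.

```python
def price_with_vat(price_cents, vat_percent):
    # The fix: integer cents with explicit rounding, replacing float
    # arithmetic that used to drift by a cent on some prices.
    return round(price_cents * (100 + vat_percent) / 100)

def test_vat_rounding_regression():
    # The bug we fixed and now refuse to track in a backlog:
    # 19.99 at 24 % VAT used to come out one cent short.
    assert price_with_vat(1999, 24) == 2479
```

The test is the memory; the backlog entry never needs to exist.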

With a lot of thought and practice, I have become pretty good at getting to places where we have zero bugs on our backlogs, and as new ones emerge, we invest in the fix over the endless rounds of prioritizing and wasted effort that tracking and prioritizing creates. 

At this point of my career, I consider efforts to classify an organization's internally discovered bugs something we should not do. We don't need the report, we need the fix. We don't need the classification, we need the retrospective actions that allow us to forget the detail while remembering the competence, approach and tooling we put in place. 

At this point of my career, the ideas I used to promote in 1999, a bug reporting database and detailed classifications for "improvement metrics", I would consider a leap back in time and a reason to choose something else to do with the little time I have available and under my control. 

I think in terms of opportunity cost: it is not sufficient that something can be done and is somewhat useful when done. I choose the work to do with an eye on the other work that could be done in the same time if this was not done.

Instead of reporting, reading and prioritizing a bug, we could fix the bug.
Instead of clarifying priorities by a committee, the committee could teach developers to make priority decisions on fixes.
Instead of reporting bugs at all from internal work, introduce "unfinished work" for internal bugs. 
Instead of expecting large numbers of bugs to manage, close every bug within days or weeks with a fix or decision. 
Instead of warehousing bugs, decide and act on your decisions.