I used AI to triage our Jira backlog. We closed 50% of our open bug tickets.

Bug backlogs grow fast.

Features get refactored. Teams file overlapping issues. Old tickets stick around long after the code has changed. Before long, you have hundreds of bugs that technically need attention, but nobody has time to manually sort through all of them.

So I tried using AI as a triage assistant.

I connected an AI assistant to Jira through MCP, which let it search, compare, and summarize tickets in bulk. The goal wasn’t to let AI make final decisions on its own. The goal was to find the tickets most likely to be duplicates, obsolete, or miscategorized so a human could review them quickly.

A few things worked especially well.

First: surfacing duplicates.

The biggest quick win was clustering tickets by description similarity, stack traces, error patterns, and affected areas.

A lot of issues that looked separate were actually the same bug reported by different people at different times. Instead of five tickets competing for attention, we could consolidate them into one stronger ticket with more context: better reproduction steps, more affected users, and a clearer picture of the real problem.

That means when an engineer finally picks it up, they’re starting from a much better place.
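To make the idea concrete, here is a minimal sketch of a duplicate-finding pass. It uses a simple token-overlap (Jaccard) heuristic as a stand-in for the richer similarity comparison the AI assistant did over descriptions and stack traces; the ticket data, keys, and threshold are all made up for illustration.

```python
import re
from itertools import combinations

# Hypothetical ticket data; in practice this would come from Jira via MCP.
tickets = {
    "BUG-101": "App crashes on login with NullPointerException in AuthService",
    "BUG-204": "NullPointerException in AuthService when user logs in on app",
    "BUG-310": "Export to CSV produces empty file",
}

def tokens(text):
    """Lowercase word tokens, used as a crude similarity fingerprint."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a, b):
    """Overlap of two token sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def likely_duplicates(tickets, threshold=0.4):
    """Pairs of ticket keys whose descriptions overlap heavily."""
    toks = {key: tokens(text) for key, text in tickets.items()}
    return [
        (a, b)
        for a, b in combinations(sorted(toks), 2)
        if jaccard(toks[a], toks[b]) >= threshold
    ]

print(likely_duplicates(tickets))  # → [('BUG-101', 'BUG-204')]
```

The output is a candidate list, not a verdict: each flagged pair still goes to a human, who decides whether to merge the tickets.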

Second: grouping by feature area.

This helped in two ways.

It caught duplicates the first pass missed — especially tickets that described the same underlying issue in completely different language.

It also exposed hotspots. When fifteen bugs are clustered around one feature, that’s a signal. The backlog stops being a random pile of tickets and starts becoming a map of where the product or codebase needs attention.
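The hotspot signal is essentially a frequency count over feature areas. A sketch, with made-up ticket keys and component labels standing in for real Jira fields:

```python
from collections import Counter

# Hypothetical (ticket key, component) pairs; real labels come from Jira fields.
tickets = [
    ("BUG-11", "checkout"), ("BUG-12", "checkout"), ("BUG-13", "search"),
    ("BUG-14", "checkout"), ("BUG-15", "auth"), ("BUG-16", "checkout"),
]

def hotspots(tickets, min_count=3):
    """Feature areas whose bug count crosses a (tunable) attention threshold."""
    counts = Counter(area for _, area in tickets)
    return [(area, n) for area, n in counts.most_common() if n >= min_count]

print(hotspots(tickets))  # → [('checkout', 4)]
```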

Third: finding tickets tied to deprecated areas.

I searched for tickets that referenced deprecated features, removed flows, or libraries we were actively moving away from.

This surfaced a set of bugs that were no longer worth treating as normal open product issues. Some were tied to legacy functionality. Others referenced libraries or patterns we had already replaced, or were in the process of replacing.

Those tickets still needed review, but they were much easier to reason about once grouped together. Instead of asking, “Is this bug still valid?” one ticket at a time, we could evaluate them in the context of the larger migration or deprecation work.
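At its simplest, this pass is a keyword scan against a deprecation list. A sketch, where the deprecated terms and tickets are invented examples (in practice the list would come from your own migration notes):

```python
# Hypothetical deprecation list; in practice this comes from migration docs.
DEPRECATED = ["legacy_export", "v1 payments flow", "moment.js"]

tickets = {
    "BUG-42": "Crash in legacy_export when report is empty",
    "BUG-77": "Date parsing off by one hour (we still use moment.js here)",
    "BUG-88": "Search results page loads slowly",
}

def stale_candidates(tickets, deprecated=DEPRECATED):
    """Tickets that mention a deprecated feature, flow, or library."""
    return sorted(
        key for key, text in tickets.items()
        if any(term.lower() in text.lower() for term in deprecated)
    )

print(stale_candidates(tickets))  # → ['BUG-42', 'BUG-77']
```

Grouping the matches together is the point: the batch gets reviewed once, in the context of the deprecation work, instead of ticket by ticket.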

A future improvement here would be having AI cross-reference tickets against the current codebase directly: checking whether mentioned APIs, files, feature flags, or libraries still exist. That would make it even easier to separate active bugs from stale backlog noise.
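A rough starting point for that cross-referencing step: extract path-like strings from a ticket and check whether they still exist in the checkout. The regex and helper names here are hypothetical, and a real version would also handle APIs, feature flags, and renamed files.

```python
import os
import re

def referenced_paths(text):
    """Pull path-like strings (e.g. src/foo/bar.py) out of a ticket description."""
    return re.findall(r"[\w./-]+\.(?:py|js|ts|java)", text)

def missing_references(ticket_text, repo_root="."):
    """Paths a ticket mentions that no longer exist in the working tree."""
    return [
        path for path in referenced_paths(ticket_text)
        if not os.path.exists(os.path.join(repo_root, path))
    ]

print(missing_references("Crash in src/gone/old_flow.py on submit"))
```

A ticket whose every referenced file has been deleted is a strong stale-backlog candidate, though it still deserves a human look before closing.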

The whole process took an afternoon and cut our open bug count in half.

AI didn’t fix the bugs. But it helped remove the noise around them.

And honestly, that might be one of the most practical uses of AI in engineering right now: not replacing judgment, but making it easier to apply judgment at scale.

How are you keeping ticket debt under control?
