New tools rarely arrive complete, especially in fast-moving AI programs where the platform team is iterating in parallel with the operational team that depends on it. You push forward anyway, because the work doesn't wait.
We were mid-transition to a new annotation system when the gaps became apparent. Analytics we'd relied on - gone. Backend access our QA process depended on - unavailable. The previous system had given us visibility we'd taken for granted. The new one, still being built out, hadn't caught up yet.
Stakeholders were still pushing work into the queues, and the work was moving. But we were operating partially blind.
I'm a tinkerer by nature. When something isn't working, my instinct is to read, dig, and follow the thread until I find a way through. So that's what I did: worked through the documentation, followed links, and tested assumptions until I'd mapped out a method to surface the data we needed from within the new system.
But I didn't go to my leadership team with a problem. I went with a solution I'd already stress-tested. I documented the method, walked the team through it on a call, and by the end everyone had what they needed to do their jobs.
That's something I learned early as a producer: figure out as much as you can before you take it upstairs. Nobody needs another problem on their plate. What they need is someone who shows up having already done the work of solving it.
In fast-moving AI programs, that instinct isn't a nicety - it's the difference between stalling and shipping.