AI as a Co-Pilot, Not a Gatekeeper
I've been using AI tools in my product work for over two years now. They're extraordinarily good at certain tasks: summarizing customer feedback, generating variations on an idea, drafting documentation, and spotting patterns in data. They save me hours every week.
But there's a dangerous temptation to let them do more than they should.
AI is excellent at amplifying patterns it's seen before. If you give it a hundred pieces of customer feedback, it can categorize them, count them, and tell you which themes appear most frequently. That's useful. But frequency isn't the same as importance. The most critical insight might come from the one customer who mentioned something nobody else did—the early signal that's easy to dismiss as an outlier.
AI optimizes for what's measurable, not what's meaningful. It can tell you that 60% of users mentioned "speed" in their feedback, but it can't tell you whether they mean page load time, time to value, or time to complete a task. You have to interpret that. And you have to understand the context.
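To make that concrete, here's a minimal sketch of the kind of frequency count described above. The feedback comments and keyword buckets are hypothetical, and real theme extraction would be messier; the point is that the percentage is trivial to compute while the distinction that matters (page load vs. time to value vs. task time) never shows up in it.

```python
# Minimal sketch (hypothetical data and keyword bucket): counting how often
# a theme like "speed" appears in feedback. The count is easy to compute,
# but it collapses very different complaints into one number.
from collections import Counter

feedback = [
    "The dashboard takes forever to load",       # page load time
    "Took me weeks to see any value from this",  # time to value
    "Exporting a report is a ten-step process",  # time to complete a task
    "Love the new charts",
    "Loading spinner everywhere, so slow",
]

SPEED_KEYWORDS = ("slow", "load", "forever", "weeks", "ten-step")

themes = Counter(
    "speed" if any(k in comment.lower() for k in SPEED_KEYWORDS) else "other"
    for comment in feedback
)

share = themes["speed"] / len(feedback)
print(f"'speed' mentioned in {share:.0%} of comments")
# Prints a single percentage; which *kind* of speed problem dominates
# is exactly what this number can't tell you.
```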
I use AI to reduce friction in my work, not to make decisions. It helps me see more, but I decide where to look and what to look at closely.
Here's what that looks like in practice:
- When analyzing feedback, I ask AI to surface themes from different angles (see the sketch after this list). Then I read through the raw comments myself, looking for what doesn't fit the pattern. The outliers are often more interesting than the consensus.
- When brainstorming and researching, I use AI to generate more possibilities and identify patterns. But I don't trust its prioritization. It doesn't know our technical constraints, our strategic bets, or what we learned from the last three failed experiments. I use it to see more options, not to choose among them.
- When drafting documentation, I write the outline and let AI create a first pass. Then I take over, going back and forth on clarity, readability, and factual accuracy. I always own the final version. Documentation is often about making commitments, not just transferring information, and that's something only I can take responsibility for, not the AI.
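Here's a rough sketch of the feedback workflow from the first bullet. The `ask_llm` callable is a hypothetical stand-in for whatever model or API you use, not a real library call; what matters is the shape of the loop: the model proposes themes from several angles, and anything it can't place cleanly gets flagged for me to read in full.

```python
# Sketch only: ask_llm is a hypothetical prompt-in, text-out function.
from typing import Callable

def surface_themes_and_outliers(
    comments: list[str],
    ask_llm: Callable[[str], str],
) -> tuple[dict[str, str], list[str]]:
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(comments, 1))

    # Ask for themes from several different angles, not just one grouping.
    angles = ["user goal", "emotion", "product area"]
    themes: dict[str, str] = {}
    for angle in angles:
        themes[angle] = ask_llm(
            f"Group these comments by {angle}. "
            f"List each theme with the comment numbers it covers.\n\n{numbered}"
        )

    # Separately ask which comments didn't fit any grouping; those are the
    # ones to read yourself, in the customer's own words.
    reply = ask_llm(
        "Which of these comments don't fit any obvious theme? "
        f"Return their numbers only.\n\n{numbered}"
    )
    flagged = [
        comments[int(tok) - 1]
        for tok in reply.replace(",", " ").split()
        if tok.isdigit() and 0 < int(tok) <= len(comments)
    ]
    return themes, flagged
```

The themes are a starting point for scanning; the flagged list is where the manual reading happens.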
The boundary is clear: AI can help me understand what users are saying, but it can't tell me what they mean. It can suggest directions based on data, but it can't make strategic bets. It can draft artifacts, but it can't own accountability.
The moment you outsource judgment to an algorithm, you've outsourced your job. And you've probably made worse decisions, because AI has no skin in the game. It doesn't feel the consequences of being wrong.