Why product's pace can't match engineering's AI leap
You've probably seen the numbers by now. Engineering teams are accelerating: developers using tools like GitHub Copilot complete coding tasks 55% faster in controlled studies, and the 2025 DORA Report shows 90% global AI adoption among developers, with over 80% reporting enhanced productivity.
Yet product teams aren't experiencing the same acceleration—and the gap keeps widening.
This isn't about rebalancing headcount or adopting better tools. Product, design, and research face structural constraints that AI, in its current form, simply can't dissolve. Understanding these barriers is the first step toward moving differently in an AI-first world.
Understanding the barriers that slow down product teams
Imagine a product team eager to leverage AI, only to find its progress gated by forces beyond its control. This is the reality for many:
External dependencies slow product validation
Unlike code, which can be generated and tested in a controlled environment, product validation often depends on the unpredictable rhythms of the real world.
A/B tests require sufficient user traffic and duration to yield statistically significant results—you can't "LLM your way" around the need for real-world data. Similarly, user research, even when automated, involves recruiting, scheduling, and conducting sessions, all of which introduce real-time delays.
The bottom line? Product validation hinges on users' availability and behavior, not just internal velocity.
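To make the constraint concrete, here is a minimal sketch of the standard two-proportion sample-size calculation; the baseline conversion rate, detectable lift, and daily traffic figures are illustrative assumptions, but the arithmetic shows why a test takes weeks of live traffic no matter how fast the feature was built.

```python
from math import ceil
from statistics import NormalDist

def ab_test_duration(baseline_rate, min_detectable_effect, daily_visitors_per_variant,
                     alpha=0.05, power=0.80):
    """Estimate how many days a two-variant A/B test needs to reach significance.

    Uses the standard two-proportion sample-size approximation; all inputs
    here are illustrative assumptions, not benchmarks.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_power = NormalDist().inv_cdf(power)           # desired statistical power
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    p_bar = (p1 + p2) / 2

    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                 z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    n_per_variant = numerator / (p2 - p1) ** 2
    return ceil(n_per_variant / daily_visitors_per_variant)

# e.g. 5% baseline conversion, +1pp lift, 500 visitors/day per variant
print(ab_test_duration(0.05, 0.01, 500))  # ≈ 17 days of live traffic
```

AI can draft the test plan in seconds; those days of real traffic still have to happen.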
Coordination creates unavoidable latency
Product development is inherently collaborative. As Conway's Law suggests, product structures mirror the communication structures of the teams that build them. The more stakeholders involved, the slower alignment becomes.
Brooks's Law also reminds us that simply adding more product managers doesn't necessarily accelerate progress; it can, in fact, increase communication overhead. Furthermore, approval bottlenecks—from compliance to legal and brand reviews—frequently fall to product teams. These processes, while essential, are known to slow delivery without always increasing quality.
Drafting a proposal might be fast, but gaining alignment and approvals remains a slow, human-centric process. AI can speed up the drafting, yet it doesn't remove decision rights or governance obligations.
AI's capabilities follow a “jagged frontier”
The "Jagged Frontier" study by Harvard and BCG offers a crucial insight: AI excels at routine tasks, but its performance can actually degrade when applied to novel, strategic work. While AI can significantly improve structured tasks like writing, it doesn't offer the same acceleration for complex, nuanced challenges.
Product managers can write faster, but AI, at its current stage, cannot prioritize or strategize faster because it doesn't yet fully grasp context and the intricate tradeoffs involved in strategic decision-making.
Risk management requires human oversight
The well-studied phenomenon of LLM "hallucinations" presents an unacceptable risk in regulated or sensitive contexts. In fields like healthcare, finance, or law, human review is not merely preferred; it's mandatory.
This inherent need for human oversight limits how quickly product teams can ship AI-assisted work. You simply cannot use AI to bypass critical risk management and compliance protocols.
Governance & compliance obligations persist despite AI
DORA's research consistently demonstrates that heavy approval processes can hinder performance. Yet, product teams frequently become the bottleneck because they are responsible for navigating external approvals related to privacy, security, and legal considerations.
These governance obligations remain a human-driven process, largely untouched by current AI capabilities.
Engineering benefits from higher automation potential
In contrast to product, engineering often benefits from a higher ceiling for automation gains. The Copilot RCT, for instance, showed a significant speedup in coding tasks. However, it's vital to remember that even with these impressive individual gains, system-level performance is ultimately capped unless other functions within the organization also improve.
The DORA 2025 report, while highlighting AI's positive impact on engineering productivity and code quality, also points to ongoing challenges in ensuring software quality before delivery and a "trust paradox" where AI outputs are useful but require human verification. This underscores the need for a holistic approach to improvement.
How product can reclaim its velocity in the age of AI
If product teams can’t simply “move faster,” they can—and must—move differently. The solution isn’t about keeping up with engineering’s pace; it’s about redesigning how product discovers, decides, and aligns in an AI-first world.
1. Redefine velocity around learning, not shipping
Product acceleration won’t come from collapsing build cycles, but from compressing feedback loops. Teams should shift their metric from “time to release” to “time to validated insight.”
AI can support this by accelerating synthesis—turning user research, usage data, and market signals into clearer, faster hypotheses.
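As a rough illustration of the metric shift, here is a minimal sketch that computes median time from hypothesis to validated insight from a discovery log; the log format and field names are assumptions.

```python
from datetime import datetime
from statistics import median

# Illustrative hypothesis log; in practice this would come from your
# discovery tracker, and these field names are assumptions.
hypotheses = [
    {"opened": "2025-03-01", "validated": "2025-03-18"},
    {"opened": "2025-03-05", "validated": "2025-04-02"},
    {"opened": "2025-03-10", "validated": "2025-03-24"},
]

def days_between(start: str, end: str) -> int:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# The metric to watch: median days from hypothesis to validated insight,
# not days from commit to release.
cycle_times = [days_between(h["opened"], h["validated"]) for h in hypotheses]
print(f"median time to validated insight: {median(cycle_times)} days")
```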
2. Integrate AI where the bottlenecks live
AI shouldn’t just generate PRDs or competitive summaries—it should mediate coordination friction. For example, copilots can pre-draft stakeholder updates, summarize alignment threads, or identify recurring blockers in decision logs.
The win isn’t automation of deliverables but orchestration of decisions.
3. Simulate the slow loops
While you can’t skip real-world validation, you can model it. Synthetic A/B testing, agent-based simulations, and AI-driven persona testing can de-risk ideas before live deployment.
These tools don’t replace real data—they reduce the cost of learning while waiting for it.
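A minimal sketch of the idea, using a toy agent-based persona simulation rather than any particular product: the persona mix and adoption probabilities are invented assumptions, useful only for a directional read before committing real traffic.

```python
import random

# Illustrative personas with assumed adoption probabilities for two variants;
# these numbers are assumptions for de-risking, not a substitute for live data.
PERSONAS = {
    "power_user":  {"share": 0.2, "adopt_A": 0.60, "adopt_B": 0.72},
    "casual_user": {"share": 0.5, "adopt_A": 0.25, "adopt_B": 0.22},
    "new_signup":  {"share": 0.3, "adopt_A": 0.10, "adopt_B": 0.18},
}

def simulate(variant: str, n_users: int = 10_000, seed: int = 7) -> float:
    """Monte Carlo estimate of adoption for a variant across the persona mix."""
    rng = random.Random(seed)
    adopted = 0
    for _ in range(n_users):
        # Pick a persona according to its share of the user base
        r, cumulative = rng.random(), 0.0
        for persona in PERSONAS.values():
            cumulative += persona["share"]
            if r <= cumulative:
                adopted += rng.random() < persona[f"adopt_{variant}"]
                break
    return adopted / n_users

print(f"A: {simulate('A'):.1%}  B: {simulate('B'):.1%}")  # directional signal only
```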
4. Restructure governance around risk tiers
AI adoption in product often stalls not for lack of tools but because every decision is treated as high-risk. A tiered governance model creates headroom for speed without sacrificing oversight.
Automate and delegate low-risk validations. Escalate only the few decisions that matter.
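One way to encode such a tiered model is a simple routing rule like the sketch below; the tiers and criteria are illustrative assumptions and would need to reflect your own compliance requirements.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    touches_regulated_data: bool   # e.g. health, finance, PII
    customer_facing_copy: bool
    reversible: bool

def review_path(d: Decision) -> str:
    """Route a decision to a review path based on an assumed three-tier model."""
    if d.touches_regulated_data:
        return "tier 3: mandatory legal/compliance review"
    if d.customer_facing_copy and not d.reversible:
        return "tier 2: async PM + brand sign-off within 48h"
    return "tier 1: auto-approved, logged for spot checks"

print(review_path(Decision(False, True, True)))   # tier 1
print(review_path(Decision(True, False, False)))  # tier 3
```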
5. Build decision memory, not just artifacts
AI excels at recall and reasoning across context. Capturing structured decision logs, assumptions, and tradeoffs allows future PMs—and AI copilots—to reason from institutional memory, not reinvent it.
This turns AI from a writing assistant into a thinking amplifier.
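As a sketch of what structured decision memory might look like, here is an illustrative record format; the fields are assumptions, and the point is simply that decisions, assumptions, and tradeoffs are captured in a form a copilot can later query.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """One entry of decision memory: structured enough for later AI recall."""
    title: str
    decided_on: date
    decision: str
    assumptions: list[str] = field(default_factory=list)
    tradeoffs: list[str] = field(default_factory=list)
    revisit_if: str = ""  # the signal that should reopen this decision

record = DecisionRecord(
    title="Defer self-serve billing to Q4",
    decided_on=date(2025, 6, 12),
    decision="Keep sales-assisted billing for the enterprise tier",
    assumptions=["<5% of enterprise leads ask for self-serve"],
    tradeoffs=["Slower expansion revenue", "Lower engineering load this quarter"],
    revisit_if="Self-serve requests exceed 15% of inbound leads",
)
```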
6. Design the product stack for embedded intelligence
Engineering got Copilot in their IDE. Product needs its equivalent inside Figma, Jira, and Confluence—embedded, contextual copilots that sit where decisions are made.
Until then, AI will remain an external “assistant” rather than an integrated force multiplier.
How to get started
The path forward isn't about matching engineering's pace—it's about redesigning how product discovers, decides, and aligns. Start by identifying which bottleneck most constrains your team's velocity, then apply AI where coordination friction lives rather than where deliverables are created.
If you need help identifying where to start for the greatest impact, contact our team or learn more about what we offer here.