Omi Iyamu · Personal Dossier · Vol. XVII · 2026 Edition
2026 · 04 · 12 · 11 min read

The Research → Product Gap

Fig. Research → product latency, Google Brain, 2017–2021: before, 22 months; after, 9 months.

There is a very specific failure mode I have watched, up close, for a decade: a research team ships a paper, a demo, and a blog post. Product looks at it, asks two or three sharp questions, and nothing happens for eighteen months. Sometimes the idea resurfaces at a competitor. Sometimes it just dies.

At Google Brain, my job, before I had the title for it, was to close that gap. We cut average research-to-product integration time from 22 months to nine. Not because we got smarter, but because we stopped treating research and product as two different departments that occasionally sent each other emails.

The playbook is unglamorous. Twelve weeks, three phases, four artifacts.

Phase one: shared problem framing. Two weeks. You do not start with a model. You start with a product surface that is bleeding. You write, in one page, the customer pain in the language the customer uses. Research and product co-sign it. If you cannot co-sign, you do not have a project yet.

Phase two: the evaluation harness. Four weeks. Before anyone trains anything, you build the eval. Not the metric the paper optimises, the metric the product needs. If the research team pushes back on this, that is your signal. Real researchers love a hard eval; it is the bench-press of the field.
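A minimal sketch of what such a harness can look like. Everything here is illustrative: `EvalCase`, `run_eval`, and the toy normalisation "model" are hypothetical names, not anything from the original team's stack. The point is structural: the harness is frozen before training starts, and it scores whatever callable you hand it on the product's metric, not the paper's.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class EvalCase:
    """One labelled example, ideally drawn from real product traffic."""
    input: str
    expected: str

def run_eval(model: Callable[[str], str], cases: Iterable[EvalCase]) -> float:
    """Return the fraction of cases the model gets right on the product metric."""
    cases = list(cases)
    hits = sum(1 for c in cases if model(c.input) == c.expected)
    return hits / len(cases)

# Models come and go; the eval stays. Here the "model" is a trivial
# text normaliser, standing in for whatever the team actually builds.
baseline = lambda text: text.strip().lower()
cases = [
    EvalCase("  HELLO ", "hello"),
    EvalCase("World", "world"),
    EvalCase("Foo", "bar"),
]
print(round(run_eval(baseline, cases), 2))  # → 0.67
```

Because the harness takes any callable, research and product can run the exact same scoreboard against a baseline heuristic, last year's model, and the new candidate.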

Phase three: the smallest shippable model. Six weeks. You do not ship the SOTA. You ship the worst model that would still make the product better. This is the single hardest cultural shift and the one that matters most. Research wants to lead with the best; product wants to ship anything; the smallest shippable model is the bridge.
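The "smallest shippable" bar can be made mechanical. A hedged sketch, with hypothetical names and thresholds: the gate asks only whether the candidate beats what ships today, inside the product's latency budget.

```python
def shippable(candidate_score: float,
              production_score: float,
              latency_p99_ms: float,
              latency_budget_ms: float) -> bool:
    """The bar is not state of the art; it is 'better than what ships
    today, inside the product's latency budget'."""
    return (candidate_score > production_score
            and latency_p99_ms <= latency_budget_ms)

# A modest model that clears the bar ships; a SOTA model that blows
# the latency budget does not. Numbers are illustrative.
print(shippable(0.71, 0.68, latency_p99_ms=120, latency_budget_ms=150))  # → True
print(shippable(0.93, 0.68, latency_p99_ms=900, latency_budget_ms=150))  # → False
```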

Artifact one is the one-pager. Artifact two is the eval harness. Artifact three is a model card with latencies and cost. Artifact four is a shadow-mode deployment on real traffic, with a kill switch, before you tell anyone outside the room.
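Artifact four, shadow mode with a kill switch, can be sketched in a few lines. This is an assumed shape, not the actual deployment machinery: the incumbent's answer is the only one users ever see, the candidate runs silently on mirrored traffic for offline scoring, and a process-wide switch shuts the shadow path off instantly.

```python
import threading

class ShadowDeployment:
    """Mirror real traffic to a candidate model; serve only the incumbent.

    The candidate's outputs are logged for offline scoring and never
    returned to the user. A kill switch disables the shadow path."""

    def __init__(self, incumbent, candidate, log):
        self.incumbent = incumbent
        self.candidate = candidate
        self.log = log
        self._killed = threading.Event()

    def kill(self):
        self._killed.set()

    def handle(self, request):
        response = self.incumbent(request)        # the user only ever sees this
        if not self._killed.is_set():
            try:
                shadow = self.candidate(request)  # scored offline, never served
                self.log.append((request, response, shadow))
            except Exception:
                self.kill()                       # any shadow failure trips the switch
        return response

# Toy stand-ins for real models.
log = []
d = ShadowDeployment(incumbent=str.upper, candidate=str.lower, log=log)
print(d.handle("Hello"))  # → HELLO
d.kill()
d.handle("World")         # shadow path is off; nothing more is logged
print(len(log))           # → 1
```

The design choice worth noticing: the kill switch only ever removes the candidate from the path. Serving is never gated on the shadow model, so the worst the experiment can do to users is nothing.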

Do this four times and you have a team that ships research. Do it four times badly and you have a team that distrusts research. The gap between those two outcomes is almost entirely about whether the first person through the door believed in the eval.

If you are a CTO right now, the question is not whether to adopt AI. It is whether your organisation can survive its own research→product latency. If that number is more than a quarter, your AI strategy is a wish list.

§ Related, keep reading

2026 · 02 · 28 · 7 min read
Hiring for AI Taste
Resumes, demos, and model evals are all lagging indicators. Here's what I screen for instead.

2025 · 11 · 09 · 14 min read
Notes on Governing AI at Hyperscale
What I learned authoring Google's company-wide AI/ML privacy framework, and how I'd rewrite it for 2026.