In 2018, I wrote about how Cambridge Analytica forced the tech industry to reckon with privacy. In 2019, I argued that personalization and trust could coexist if companies earned the right to use data. In the years since, those essays have become a thread running through everything I write: the products that win long-term are the ones that treat user trust as a product requirement, not an afterthought.
AI makes this more urgent, not less. The stakes are higher, the failure modes are weirder, and the speed of deployment means mistakes reach millions of people before anyone notices.
At too many companies, responsible AI lives in a governance team that reviews features after they're built. A checklist gets filled out. A review meeting happens. The feature ships anyway because the launch date was already announced. The governance team exists to reduce liability, not to shape the product.
That's not responsible AI. That's compliance theater.
Responsible AI, done right, is a product discipline. It's baked into the design process from the beginning, not bolted on at the end. The PM owns it the same way they own the user experience, because it is the user experience. A feature that produces biased outputs, surfaces harmful content, or erodes user privacy isn't a governance failure. It's a product failure. In practice, the discipline comes down to three commitments.
Transparency over magic. Users should understand, at a level appropriate to the context, what the AI is doing and why. Not the technical details of how the model works, but the practical reality: where did this answer come from? Is it generated or retrieved? How confident is the system? The temptation in AI product design is to make the AI feel seamless and invisible. But invisible AI is unaccountable AI. A user who doesn't know the AI wrote something can't evaluate whether to trust it.
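Here's a rough sketch of what that can look like in practice: attach provenance metadata to every AI answer and render it in plain language next to the response. This is illustrative TypeScript, not any particular product's API; the `AnswerProvenance` type and `describeProvenance` helper are hypothetical names.

```typescript
// Hypothetical provenance metadata attached to every AI answer,
// so the UI can tell users where a response came from.
type AnswerOrigin = "generated" | "retrieved" | "hybrid";

interface AnswerProvenance {
  origin: AnswerOrigin;                   // written by the model, or pulled from a source?
  sources: string[];                      // URLs or document IDs; empty if purely generated
  confidence: "low" | "medium" | "high";  // coarse, user-facing confidence band
  modelVersion: string;                   // which model produced it, for auditability
}

// Turn provenance into plain language a user can act on.
function describeProvenance(p: AnswerProvenance): string {
  const originText =
    p.origin === "retrieved"
      ? `Summarized from ${p.sources.length} source(s)`
      : p.origin === "hybrid"
      ? "Generated with help from retrieved sources"
      : "Generated by AI, not drawn from a specific source";
  return `${originText}. Confidence: ${p.confidence}.`;
}

// Example: the UI shows this next to the answer instead of hiding it.
console.log(
  describeProvenance({
    origin: "hybrid",
    sources: ["https://example.com/docs/faq"],
    confidence: "medium",
    modelVersion: "assistant-2025-05",
  })
);
```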
Guardrails as features, not constraints. Safety guardrails often get framed as limitations: things the AI can't do, topics it can't address, outputs it can't produce. That framing creates an adversarial relationship between the product team (which wants capability) and the safety team (which wants restriction). The reframe that works better: guardrails are features that protect the user. A content filter that prevents harmful medical advice isn't a limitation. It's the product working correctly. Design guardrails as part of the experience, not as fences around it.
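One way to make the reframe concrete in code: instead of silently refusing, a guardrail returns a user-facing explanation and a safer alternative, so the intervention reads as the product helping rather than blocking. The sketch below assumes a hypothetical medical-advice check; a real system would use classifiers and policy layers, this only shows the shape.

```typescript
// Hypothetical guardrail result: not just "blocked / allowed", but
// an explanation and a constructive next step the UI can show.
interface GuardrailResult {
  allowed: boolean;
  userMessage?: string;     // shown to the user when we intervene
  safeAlternative?: string; // what the product offers instead
}

// Illustrative check for one category (definitive medical advice).
function checkMedicalAdvice(prompt: string, draftAnswer: string): GuardrailResult {
  const looksMedical = /\b(dosage|diagnos|prescri)\w*/i.test(prompt);
  const soundsDefinitive = /\byou should take\b/i.test(draftAnswer);

  if (looksMedical && soundsDefinitive) {
    return {
      allowed: false,
      userMessage:
        "This looks like a medical question, and I can't give definitive medical advice.",
      safeAlternative:
        "Here's general background information, plus a reminder to check with a clinician.",
    };
  }
  return { allowed: true };
}

// The pipeline treats an intervention as part of the answer, not as an error.
const result = checkMedicalAdvice(
  "What dosage of ibuprofen should I take for a migraine?",
  "You should take 800mg every 4 hours."
);
console.log(
  result.allowed ? "ship answer" : `${result.userMessage} ${result.safeAlternative}`
);
```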
Harm mapping before launch. Every AI feature should go through a harm mapping exercise before it ships. Who could this hurt? How? Under what conditions? This isn't hypothetical risk assessment. It's concrete scenario planning. What happens if a user asks this feature to help with self-harm? What happens if the model produces a confidently wrong answer about a legal or medical topic? What happens if the feature works differently for different demographic groups? If you can't answer these questions, you're not ready to ship.
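The exercise is easier to enforce when the answers live in a structured artifact rather than a meeting. A sketch of what that record might look like, with hypothetical field names; the point is that an unanswered scenario is a blocking state, not a footnote.

```typescript
// Hypothetical harm-mapping record kept alongside the feature spec.
interface HarmScenario {
  scenario: string;        // concrete situation, not an abstract risk
  whoIsAffected: string;   // which users or groups could be hurt
  conditions: string;      // when and how this failure surfaces
  mitigation: string | null; // null means "not yet answered"
}

interface HarmMap {
  feature: string;
  scenarios: HarmScenario[];
}

// A launch gate: if any scenario is unanswered, the feature isn't ready to ship.
function readyToShip(map: HarmMap): boolean {
  return map.scenarios.length > 0 && map.scenarios.every((s) => s.mitigation !== null);
}

// Illustrative example for a hypothetical summarization feature.
const summarizerHarmMap: HarmMap = {
  feature: "inbox-summarizer",
  scenarios: [
    {
      scenario: "User asks the feature to summarize messages about self-harm",
      whoIsAffected: "Users in crisis",
      conditions: "Any conversation containing self-harm content",
      mitigation: "Route to crisis resources; never summarize as routine content",
    },
    {
      scenario: "Confidently wrong summary of a legal or medical thread",
      whoIsAffected: "Users making high-stakes decisions from the summary",
      conditions: "Low-quality source messages, ambiguous language",
      mitigation: null, // unanswered: this blocks launch
    },
  ],
};

console.log(readyToShip(summarizerHarmMap)); // false until every mitigation is filled in
```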
I've heard PMs say "responsible AI is the ethics team's job" or "safety is an engineering concern." Both are wrong. The PM decides what gets built, for whom, and under what conditions. That means the PM is the person most responsible for ensuring the product doesn't cause harm.
This isn't about being the AI police. It's about the same skill set PMs have always needed: understanding users deeply enough to anticipate how they'll interact with the product, including the ways you didn't intend.
At Firefox, this is central to how we build. Every AI feature goes through harm mapping. Every feature has an off-switch. Every user has control over what the AI does and doesn't see. Not because regulation requires it (though it increasingly does), but because trust is the product.
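For a sense of what "every feature has an off-switch and every user controls what the AI sees" can look like as a settings surface, here's an illustrative shape. It is not Firefox's actual preferences schema; the feature names and fields are made up for the example.

```typescript
// Illustrative per-feature AI settings: every feature can be turned off,
// and data visibility is opt-in per category rather than all-or-nothing.
interface AIFeatureSettings {
  enabled: boolean;                // the off-switch, per feature
  dataVisibility: {
    pageContent: boolean;          // can the AI read the current page?
    history: boolean;              // can it use browsing history?
    bookmarks: boolean;
  };
}

type AISettings = Record<string, AIFeatureSettings>;

// Defaults err on the side of seeing less, not more.
const defaults: AISettings = {
  "tab-grouping": {
    enabled: true,
    dataVisibility: { pageContent: false, history: false, bookmarks: false },
  },
  "page-summarizer": {
    enabled: false, // off until the user turns it on
    dataVisibility: { pageContent: true, history: false, bookmarks: false },
  },
};

// A disabled feature simply does not run; there is no "soft" mode.
function featureAllowed(settings: AISettings, feature: string): boolean {
  return settings[feature]?.enabled ?? false;
}

console.log(featureAllowed(defaults, "page-summarizer")); // false
```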
The companies that build AI responsibly in 2025 will have a structural advantage in 2030. Trust, once lost, is nearly impossible to rebuild. And as AI becomes more capable, more autonomous, and more embedded in high-stakes decisions, the products that earned trust early will be the ones users choose to let deeper into their lives.
I've been writing about trust and technology for seven years now. The tools have changed. The principle hasn't: the products that treat user trust as sacred are the products that last.
Responsible AI isn't a tax on innovation. It's the foundation of it.