AI Ethics in Digital Advertising

Published on February 17, 2025

I spend a lot of time thinking about what happens when AI systems make decisions that affect millions of people and nobody can quite explain why. In digital advertising, that's already the reality. Bid optimisation, audience segmentation, creative selection — these are increasingly driven by models that even their builders struggle to interrogate.

That should concern us. Not because AI in advertising is inherently harmful, but because the gap between capability and accountability is widening faster than most in the industry appreciate.

The explainability problem

When I ran the Privacy Product at Microsoft Ads, one of the recurring tensions was between model performance and explainability. A more opaque model often wins on click-through rate. But when a regulator asks why a particular demographic was excluded from seeing a housing ad, "the model decided" isn't an answer. And it shouldn't be.

Explainability isn't just a compliance box to tick. It's a design constraint that forces better thinking about what signals a model should actually be using. In my experience, the best-performing systems are the ones where engineers had to justify every input feature — not just to regulators, but to themselves.
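To make that concrete: imagine a training pipeline that simply refuses any input feature without a documented justification. The sketch below is hypothetical Python with made-up feature names, not a real system I've shipped, but it captures the discipline I'm describing.

```python
# Minimal sketch: block training on any feature that lacks a documented
# justification. Feature names and reasons here are hypothetical.

FEATURE_JUSTIFICATIONS = {
    "time_of_day": "Ad relevance varies by daypart; no protected-class proxy.",
    "device_type": "Creative format depends on screen; reviewed for proxy risk.",
    # "postcode" is deliberately absent: nobody could justify it.
}

def validate_features(features: list[str]) -> None:
    """Raise before training starts if any feature is unjustified."""
    unjustified = [f for f in features if f not in FEATURE_JUSTIFICATIONS]
    if unjustified:
        raise ValueError(
            f"Refusing to train: no documented justification for {unjustified}"
        )

validate_features(["time_of_day", "device_type"])   # passes
# validate_features(["time_of_day", "postcode"])    # raises ValueError
```

The point isn't the code; it's that the justification lives next to the feature, so the question "why is this signal here?" gets asked before training, not after a complaint.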

Bias isn't theoretical

Training data reflects the world that generated it, including its inequities. I've seen models that, left unchecked, would systematically under-serve ads to certain postcodes or age brackets — not because anyone intended discrimination, but because historical bid patterns had discrimination baked in.

The fix isn't to pretend bias doesn't exist or to bolt on a fairness metric as an afterthought. It requires diverse teams looking at outputs from multiple angles, and it requires continuous monitoring — not a one-off audit when something goes wrong publicly.
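What does continuous monitoring look like in practice? Even something as simple as a recurring disparity check on serving rates can catch drift long before an annual audit would. A toy sketch, with illustrative group labels and counts, and a threshold loosely borrowed from the four-fifths rule used in employment-selection audits:

```python
# Toy sketch of a recurring disparity check on ad-serving rates.
# Group labels, counts, and the 0.8 threshold are all illustrative.

def disparity_ratio(served: dict[str, int], eligible: dict[str, int]) -> float:
    """Ratio of the lowest group serving rate to the highest."""
    rates = {g: served[g] / eligible[g] for g in eligible if eligible[g] > 0}
    return min(rates.values()) / max(rates.values())

served = {"18-24": 4_100, "25-54": 4_800, "55+": 2_300}
eligible = {"18-24": 10_000, "25-54": 10_000, "55+": 10_000}

ratio = disparity_ratio(served, eligible)
if ratio < 0.8:  # flag for human review; don't silently auto-correct
    print(f"ALERT: serving disparity ratio {ratio:.2f} below threshold")
```

A check like this doesn't tell you whether a disparity is justified. It tells you that a human needs to look, which is the part most pipelines skip.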

Privacy and data appetite

AI models are hungry. The more data you feed them, the better they perform — at least on narrow metrics. But "better targeting" achieved through mass data collection is a Pyrrhic victory if it destroys user trust or violates emerging regulation. This is why I've been a strong advocate for privacy-enhancing technologies like federated learning and differential privacy. They force you to build systems that work with less data, which paradoxically often means building systems that work better in the long run.
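For a flavour of what these technologies involve, here's a toy example of the Laplace mechanism from differential privacy: report an aggregate count with calibrated noise so that no single user's presence is identifiable. The epsilon value and count are placeholders; a real deployment needs careful privacy-budget accounting across queries.

```python
import numpy as np

def private_count(true_count: int, epsilon: float) -> float:
    """A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. report how many users clicked, with an illustrative epsilon
print(private_count(true_count=1_237, epsilon=0.5))
```

Notice the trade-off is explicit: a smaller epsilon means more noise and stronger privacy. Forcing that choice into the open is exactly the discipline mass data collection lets you avoid.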

The thin line between persuasion and manipulation

Ad tech has always been in the persuasion business. But AI dramatically increases the precision and speed of behavioural influence. When a system can micro-target emotional states in real time, the ethical question shifts. Are we helping people find products they genuinely want, or are we exploiting cognitive vulnerabilities?

I don't think most ad tech companies are deliberately manipulative. But I do think the incentive structures — optimising for clicks, conversions, engagement — can produce manipulative outcomes if nobody stops to ask whether they should.

Governance that actually works

The gap I see most often isn't a lack of ethics principles. Every company has those. The gap is between principles and operational practice. An AI ethics board that meets quarterly and reviews high-level policy isn't governing anything. Governance means having someone accountable for every model in production, with the authority to pull it if it misbehaves.
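One way to make that structural rather than aspirational is a model registry that refuses to register anything without a named owner, and gives that owner the authority to pull the model immediately. A minimal sketch, with hypothetical names and fields:

```python
from dataclasses import dataclass

# Minimal sketch of structural accountability: no model serves without
# a named owner, and the owner can disable it without convening a board.
# All names and fields here are hypothetical.

@dataclass
class ProductionModel:
    name: str
    owner: str            # a person, not a team alias
    enabled: bool = True

class ModelRegistry:
    def __init__(self) -> None:
        self._models: dict[str, ProductionModel] = {}

    def register(self, model: ProductionModel) -> None:
        if not model.owner:
            raise ValueError(f"{model.name}: refusing to register without an owner")
        self._models[model.name] = model

    def pull(self, name: str, requested_by: str) -> None:
        """The named owner can disable a model immediately."""
        model = self._models[name]
        if requested_by != model.owner:
            raise PermissionError(f"Only {model.owner} can pull {name}")
        model.enabled = False

registry = ModelRegistry()
registry.register(ProductionModel(name="bid-optimiser-v7", owner="a.nguyen"))
registry.pull("bid-optimiser-v7", requested_by="a.nguyen")
```

The detail that matters is the kill switch sitting with a named individual. Quarterly committees diffuse responsibility; registries like this concentrate it.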

At Prebid.org and IAB Europe, I've pushed for standards that make this kind of accountability structural rather than aspirational. It's slow work. But the alternative — waiting for a scandal to force the issue — is worse for everyone.

What I think the industry needs to do

Stop treating ethics as a PR exercise. Build explainability into model design requirements, not just documentation. Invest in bias detection that runs continuously, not annually. Accept that some targeting capabilities should be voluntarily constrained even if they're technically legal. And start having honest conversations about what "optimisation" actually means when the thing being optimised is human attention.

The industry that gets this right will earn the regulatory goodwill and user trust to keep innovating. The one that doesn't will find itself regulated into a much smaller box than the one it currently struggles against.