
AI Chat Tools

Your team has probably already started using AI chat tools like ChatGPT and Claude. Even if nobody has told you, even if there's no policy, even if you've never discussed it — there's a good chance that somewhere in your business, someone has pasted a client email into ChatGPT to help write a response. Or summarized a meeting in Claude. Or asked Copilot to draft a proposal.

That's not a reason to panic. It's a reason to get ahead of it.

AI chat tools — ChatGPT, Claude, Microsoft Copilot, Google Gemini — are genuinely useful for work. The risk isn't in using them. The risk is in using them without understanding what leaves your business when you do. This guide explains the five things you actually need to get right, written for the person who has part-time responsibility for this and a full-time job doing everything else.

Start Here: Take The Assessment

If you haven't already, take the assessment at https://app.riskhelper.ai/assess/ai-chat-tools and review your results.

What Your Assessment Results Mean

The assessment you just completed places your business in one of three situations. Each has a different starting point.

You haven't deployed yet — You're in the best position. The right order is: make the policy decision, pick the account tier, run a 20-minute training session, then give access. Everything in this guide is aimed at you.

Some employees are using these tools informally — This is the most common situation in small businesses. The risk is already present, just invisible. Your immediate priority is closing the gap between what's happening and what you've sanctioned. That means getting people onto the right account type and giving them a clear rule about what can and can't go in.

You're using them and want to check your controls — You're looking for gaps. Focus especially on account tier, third-party integrations, and whether your policy covers the specific tools people are actually using — not just AI in general.

1. The Account Tier Problem: Not All ChatGPT Is the Same

This is the single most important thing to understand, and it's the thing most small businesses get wrong.

When your employees use a free or personal ChatGPT account for work, they are using a consumer product with consumer terms. By default, OpenAI may use conversation content to improve its models. Anything typed — a client's name, a financial figure, a draft contract, an internal strategy document — may become part of how the model learns. You have no contract with OpenAI governing how that data is handled, no data processing agreement, and no audit trail. If you have EU customers or employees, processing their personal data this way is likely a GDPR violation before anything else goes wrong.

The same broadly applies to free tiers of Claude, Gemini, and other tools. The consumer products and the business products are not the same thing.

Business and enterprise accounts change this fundamentally. ChatGPT Team and Enterprise accounts are opted out of model training by default — your data doesn't become training data. Microsoft 365 Copilot keeps your data within your Microsoft tenant and doesn't use it to train foundation models. Claude for Work similarly excludes conversation data from training. These accounts also come with data processing agreements, which is the legal document that makes GDPR-compliant use possible.

The practical implication: if people are going to use these tools for work, they should do it on a company account, not a personal one. The cost difference — typically $20–30 per user per month depending on the tool — is not large relative to the liability exposure of getting it wrong.

By your current situation:
- Not deployed yet: Choose a business account tier before you give access. Don't let personal accounts become the path of least resistance.
- Informal use already happening: Audit what accounts people are on. If they're on free or personal accounts, migrating to a business account is the most important action you can take.
- Controls already in place: Verify the training opt-out is actually applied. With enterprise accounts, this is the default — but check. It's worth five minutes.

OpenAI — Business and Enterprise Privacy Controls

OpenAI's own documentation on what data is and isn't used for training under different account tiers. The definitive source for understanding what "opted out of model training" actually means in practice.

Microsoft — Data, Privacy, and Security for Microsoft 365 Copilot

Microsoft's documentation on how Copilot handles your data, including the tenant boundary and what protections apply to your business content.

2. The "What Not to Type" Rule: One Sentence That Prevents Most Incidents

Most data incidents involving AI chat tools don't result from a security breach. They happen because an employee, trying to be helpful and efficient, pastes something into a chat window without realizing it constitutes a security, privacy, or contractual breach.

A customer email with names and addresses. A staff member's performance review. A draft contract with commercial terms. Source code. Financial forecasts. Medical records. These things end up in AI chat tools every day, in businesses of every size, because nobody told the person typing that there was a problem with it.

The most important single intervention you can make — before policy, before training, before anything else — is a clear, specific rule about what cannot be typed in. Not "be careful with sensitive data" (that's too vague), but a concrete list.

The minimum viable "do not type" list for most small businesses:
- Customer or client names, email addresses, phone numbers, or any other personal information about identifiable individuals
- Financial data — account numbers, revenue figures, forecasts, payroll details
- Passwords, API keys, or any access credentials
- Patient records, medical information, or anything covered by health privacy rules (if applicable)
- Draft contracts, legal correspondence, or legally privileged communications
- Proprietary source code, product designs, or trade secrets
- Any document marked confidential

Post this somewhere visible. Put it in the onboarding. Make it the first thing covered in any AI training session. The goal isn't to stop people from using AI — it's to create a clear moment of pause before they paste something they shouldn't.
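If you want a technical backstop for the list above, a lightweight check can flag the most mechanically detectable categories before text leaves the clipboard. Here is a minimal sketch in Python; the patterns and function names are illustrative assumptions, not a product, and a check like this creates a moment of pause rather than guaranteeing coverage:

```python
import re

# Illustrative patterns only: a regex check catches the obvious cases
# (emails, card-like numbers, common API-key shapes), not every category
# on the "do not type" list. It prompts a human pause, nothing more.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible API key": re.compile(r"\b(?:sk|pk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}

def check_before_pasting(text: str) -> list[str]:
    """Return the categories of sensitive-looking content found in `text`."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

hits = check_before_pasting("Please reply to jane.doe@client.com about the renewal")
if hits:
    print("Pause before pasting. Found:", ", ".join(hits))
```

A script like this will never catch a draft contract or a trade secret; those still depend on the posted list and the training session.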

NCSC — Generative AI: Learn How to Use It Safely

The UK's National Cyber Security Centre guidance on safe AI use for organizations, including practical advice on what types of content carry the highest risk when entered into AI tools.

3. Shadow AI: Why Banning Doesn't Work

Here's something that's been well-established by research: if you ban AI tools without providing an alternative, most employees keep using them anyway. They just use personal accounts on personal devices, outside any visibility you have. A formal ban achieves the opposite of what it intends — it pushes use underground and removes the last vestiges of control.

The data is striking. More than 80% of workers report using unapproved AI tools at work. IBM's 2025 Cost of a Data Breach report found that one in five organizations has already experienced a breach linked to shadow AI. The Samsung incident — where engineers pasted proprietary semiconductor source code into free ChatGPT multiple times within a single month — has become the defining case study, but it's representative of what's happening at scale across businesses of every size.

The reason shadow AI happens is straightforward: employees find genuine value in these tools, and the free consumer versions are easy to access. If your organization doesn't provide a sanctioned alternative that's actually good enough to use, people will find their own.

The governance-first response — provide an approved tool, set clear boundaries, give people a way to do the right thing easily — consistently outperforms prohibition in the research to date. Healthcare organizations that provided approved AI alternatives saw an 89% reduction in unauthorized tool use. The pattern repeats across industries.

For a small business, the practical translation is: pick one tool, get the right account tier, make it easy for employees to access it from where they already work, and make the approved version clearly better than using a personal account. You don't need to block every AI URL. You need to make the sanctioned option the default.

Vectra AI — Shadow AI Explained

A thorough breakdown of what shadow AI is, why it happens, and what the research says about governance-first approaches vs. prohibition. Useful for building the internal case for providing sanctioned tools rather than banning use entirely.

4. Policy and Training: What Minimal Viable Governance Looks Like

A formal AI policy doesn't need to be a 20-page document. For a small business, the essential elements fit on one page, and the minimum viable version can be drafted in an afternoon.

What an AI acceptable use policy needs to cover:

Which tools are approved — Name them specifically. "ChatGPT (business account only, not free or personal)" is clearer and more enforceable than "AI tools that comply with our data policy."

What cannot be typed in — The list from Section 2, verbatim. This is the most important clause.

What the rules are for AI-assisted work product — AI outputs need human review before they go to clients, customers, or regulators. A clause covering this prevents the liability of sending a factually wrong or hallucinated document on company letterhead.

Who to ask if unsure — Name a person. In a small business, that's probably you. The goal is to create a low-friction path to asking rather than guessing.

What happens if the policy is breached — It doesn't need to be punitive, but it needs to exist. "Violations will be addressed through our standard disciplinary process" is sufficient.

Training doesn't need to be elaborate. A documented 20-minute session covering what these tools are, what can't be typed in, and how to report a concern is enough for most small businesses. The word "documented" matters — if you ever need to demonstrate that you took reasonable steps to prevent a data incident, your training attendance record is the evidence.

By your team size:
- Solo or 2–10 employees: A one-page policy and a single team conversation with notes. That's your documented training record.
- 11–50 employees: Written policy distributed and acknowledged. A brief onboarding session for new starters. Annual refresh.
- 51+ employees: Formal policy with sign-off. Training tracked by manager. Consider a quick-reference guide posted in shared tools.

SHRM — Generative AI Usage Policy Template

The Society for Human Resource Management's template, adapted for workplace use. A practical starting point rather than writing from scratch.

ICO — Guidance for Organisations Using Generative AI

The ICO's guidance on what GDPR compliance looks like when your organization uses AI tools, written for non-specialists. Essential reading for any business with UK or EU users or employees.

5. Integrations: The Hidden Risk Surface

Most small businesses start with the simplest use case: someone opens a chat window, types something, gets a response. That's relatively low risk when the right account tier is in place and the "what not to type" rule is followed. Where risk escalates significantly is when AI tools get connected to your other systems.

Integrations are how AI tools go from being a research and drafting assistant to being an agent that can read your email, access your files, write to your CRM, and take actions on your behalf. The value is real — the productivity gains from AI that can see your calendar, draft from your documents, and update your customer records are substantial. But each connection is also a new exposure surface, and in a small business those integrations often get set up quickly by whoever is most enthusiastic about the tools, without a systematic review of what data is now flowing where.

The specific risk with integrations is that the "what not to type" rule no longer fully applies. If your AI tool is connected to your Google Drive, it can access files without you manually pasting them. If it's connected to your CRM, it can see your customer database. If it's connected to your email, it can read client correspondence. The protection that comes from a deliberate human decision to paste something in disappears when the system is pulling data automatically.

Before enabling any integration:

- Know exactly what data the AI can access — if you're connecting a CRM, which fields? If you're connecting email, which accounts and how far back?
- Check whether the integration is covered by your business account's data protections — some third-party integrations route data through different systems that may not have the same contractual protections as the core tool
- Verify your existing access controls — Microsoft Copilot's data boundary only protects you if your SharePoint permissions are correctly set; if employees have broader access than they should, Copilot will surface documents they were never supposed to see

For Microsoft 365 users specifically: Copilot will see everything your permissions allow. If your SharePoint is a mess of over-permissioned folders from years of convenience decisions, Copilot will expose that immediately. The right time to tidy up permissions is before you enable Copilot, not after.
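For teams comfortable with a script, Microsoft Graph can enumerate sharing on a document library before Copilot is enabled. Here is a minimal sketch, assuming an Entra ID app with the Files.Read.All permission and a known drive ID; it checks only top-level items and skips pagination and folder recursion, so treat it as a starting point rather than a complete audit:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"    # placeholder: token from an Entra ID app with Files.Read.All
DRIVE_ID = "<drive-id>"     # placeholder: the document library to audit

headers = {"Authorization": f"Bearer {TOKEN}"}

# Top-level items in the library (no pagination or folder recursion in this sketch)
items = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=headers).json()["value"]

for item in items:
    perms = requests.get(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions", headers=headers
    ).json()["value"]
    for perm in perms:
        # Sharing links carry a scope; "anonymous" and "organization" are the broad ones
        scope = perm.get("link", {}).get("scope")
        if scope in ("anonymous", "organization"):
            print(f"Broad sharing on '{item['name']}': {scope} link")
```

Anything flagged as an anonymous or organization-wide link is a candidate for tightening before Copilot can surface it.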

Microsoft — Learn About Copilot Data Protection

Covers exactly which data Copilot can access, how the tenant boundary works, and what you need to have in place before enabling integrations. The most practically useful reference for small businesses already in the Microsoft ecosystem.

Putting It Together

The goal isn't to stop your team using AI. It's to make sure the way they use it doesn't expose your business, your clients, or your employees to a risk you didn't choose to take.

The four decisions that resolve most of the risk for most small businesses are: get people onto business accounts, give them a clear rule about what not to type, write a one-page policy, and run a short training session. None of those take more than a day to implement. None of them require a specialist. And all of them put you in a materially better position than the majority of small businesses in the same situation.

The productivity gains from these tools are real. The risks are manageable. The businesses that handle this well aren't the ones that banned everything — they're the ones that set simple, clear expectations and made the approved path easy to follow.

Privacy Risk (General)

Privacy gets mischaracterized as a legal problem. Someone in compliance raises a concern, legal reviews it, your roadmap slows down, and you ship a watered-down version of whatever you were building. Then it happens again.

That framing gets the causality backwards. Privacy isn't a tax on your product — it's a dimension of product quality. The same way a product that crashes is a bad product, a product that handles data carelessly is a bad product. It erodes trust, increases churn, attracts scrutiny, and eventually fails in ways that are expensive to reverse.

This guide covers five pillars of privacy that product managers need to own — not because regulators are watching (though they are), but because users are making decisions about your product based on how well you demonstrate you respect their information.

Your Regulatory Context in One Paragraph

If your product operates across the US, UK, EU, and Canada, you are subject to four overlapping but distinct privacy frameworks. The EU's GDPR is the most demanding: opt-in consent required for most processing, strict rules on data transfers, fines up to 4% of global revenue. The UK GDPR is substantively identical post-Brexit, enforced by the ICO. Canada's PIPEDA is GDPR-adjacent but consent-focused, with less prescriptive enforcement — though Quebec's Law 25 has tightened requirements considerably since 2023. The US has no federal law, but California's CCPA/CPRA is effectively the national standard for any product with meaningful scale, giving users rights to access, delete, and opt out of data sales, with 20+ other states following in some form.

The practical takeaway: build to GDPR-level rigor and you satisfy the requirements in all four markets. That's not legal advice — it's a design shortcut. The remainder of this guide focuses on what that means in product terms, not in legal ones.

1. Data Minimization: Collect What You Need, Not What You Might Use Someday

The most common privacy mistake isn't malicious — it's a product culture that treats data collection as a default. If a field could be useful, collect it. If a behavior could reveal something interesting, track it. Over time this creates a sprawling data footprint that no one fully understands, with risk embedded throughout.

Data minimization inverts that logic: start from what the product genuinely needs to function, and don't collect anything beyond that without a clear, specific reason. Every data point you don't hold is a data breach you can't have. It's also storage you don't pay for, complexity you don't carry, and liability you don't bear.

This principle applies not just to what you collect at signup, but to what you retain over time. Data that was useful at one point in your product's history often accumulates indefinitely because no one built the process to remove it. Retention schedules — predefined rules for how long each category of data is kept — are a simple operational discipline that most products don't implement until a breach or a regulator asks.

By data sensitivity:
- Low sensitivity (usage analytics, feature telemetry): Collect, but aggregate quickly. Prefer anonymous or pseudonymous data where the business purpose allows.
- Medium sensitivity (account data, behavioral profiles, purchase history): Define retention periods. Audit quarterly whether you're still using what you're storing.
- High sensitivity (health data, financial data, precise location, communications content): Apply strict minimization from day one. Collect only what the feature requires. Obtain explicit, informed consent. Treat retention as a risk exposure.
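Retention schedules only work if something actually enforces them. Here is a minimal sketch of an automated purge, assuming hypothetical table names and a created_at timestamp column on each; the schedule values are illustrative:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical schedule: days to keep each category of data
RETENTION_DAYS = {
    "usage_events": 90,          # low sensitivity: aggregate, then drop raw rows
    "behavioral_profiles": 365,  # medium sensitivity: audit quarterly
    "location_pings": 30,        # high sensitivity: shortest window
}

def purge_expired(conn: sqlite3.Connection) -> None:
    """Delete rows older than each table's retention window.
    Assumes every table has an ISO-8601 `created_at` column."""
    for table, days in RETENTION_DAYS.items():
        cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
        conn.execute(f"DELETE FROM {table} WHERE created_at < ?", (cutoff,))
    conn.commit()

# Demo against an in-memory database with the assumed schema
conn = sqlite3.connect(":memory:")
for table in RETENTION_DAYS:
    conn.execute(f"CREATE TABLE {table} (id INTEGER, created_at TEXT)")
purge_expired(conn)
```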

ICO — Privacy in the Product Design Lifecycle

The UK's Information Commissioner's Office wrote this guidance specifically for PMs, UX designers, and engineers. It covers how to embed data minimization and privacy thinking across every stage from kick-off to post-launch. One of the most practically useful official resources on this topic.

IAPP — Redefining Data Mapping

From the International Association of Privacy Professionals: a clear explanation of what data mapping actually means in practice, why it's the foundation of any minimization effort, and how to connect it to your compliance obligations.

2. Consent and Transparency: Honest Communication, Meaningful Choice

Consent is one of the most commonly misunderstood concepts in privacy. Many products treat it as a box to check — a cookie banner that appears, a privacy policy that exists, an "I agree" that users click. That's not consent. That's documentation theater.

Genuine consent has four properties: it's informed (the user actually understands what they're agreeing to), freely given (saying no doesn't break the core product experience), specific (agreeing to one thing doesn't mean agreeing to everything), and revocable (users can change their mind and the product respects it). Under GDPR — and increasingly everywhere else — anything less doesn't qualify.

The failure mode here is dark patterns: design choices that manipulate users into consenting to things they wouldn't otherwise choose. Burying the "reject all" option three screens deep. Pre-checked boxes that assume consent. Making the product noticeably worse when a user declines tracking. These tactics generate higher consent rates in the short term and erode trust systematically over the long term — while attracting exactly the kind of enforcement attention that halts roadmaps entirely.

The counter-approach is straightforward: design consent flows with the same craft you'd apply to any user-facing feature. The clearest test is symmetry — declining should be as easy as accepting. If it isn't, you have a dark pattern.
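One way to make the four properties concrete is to model consent as per-purpose records rather than a single flag. Here is a minimal sketch with hypothetical purpose and field names, showing specificity and revocability as code-level symmetry:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One user's consent state, tracked per specific purpose."""
    user_id: str
    grants: dict = field(default_factory=dict)  # purpose -> timestamp of grant, or None

    def grant(self, purpose: str) -> None:
        self.grants[purpose] = datetime.now(timezone.utc)

    def revoke(self, purpose: str) -> None:
        self.grants[purpose] = None  # revoking takes exactly one call, same as granting

    def allows(self, purpose: str) -> bool:
        return self.grants.get(purpose) is not None

record = ConsentRecord(user_id="u_123")
record.grant("analytics")   # specific: one purpose, not a blanket "I agree"
record.revoke("analytics")  # revocable: the product respects the change
assert not record.allows("analytics")
```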

By processing purpose:
- Functional processing (data needed for the product to work): No consent required, but transparency is. Be explicit about what you collect and why.
- Analytics and improvement: Consent required in the EU/UK. Design opt-in flows that explain the benefit to users, not just to your team.
- Marketing and behavioral advertising: Highest bar. Explicit opt-in everywhere. Make opting out easy and honor it completely — including downstream to any ad partners.

EDPB Guidelines on Dark Patterns

The European Data Protection Board's official guidance on dark patterns in consent interfaces. Concrete, specific, and directly applicable to product and UX decisions. Required reading for anyone designing consent flows for EU/UK users.

OneTrust — Principles of Privacy by Design

A practical primer on the seven Privacy by Design principles, with particular attention to transparency and consent as product design problems, not legal ones.

3. User Rights: Access, Deletion, and Portability as Product Features

Your users have the right to ask what data you hold about them, to request that you delete it, and — in many jurisdictions — to receive it in a portable format so they can take it elsewhere. These aren't abstract legal entitlements. They're features you have to build, with UX, workflows, response SLAs, and operational processes behind them.

Most products treat Data Subject Requests (DSRs) as an exception-handling problem — something the legal or support team deals with when someone complains. That works at very small scale. It breaks badly when your product grows, when regulatory scrutiny increases, or when a request arrives from a user who knows their rights and is paying attention to how you respond.

The better frame is to treat user rights as a product category. Good DSR handling — fast, frictionless, transparent about what you're doing and why — is a trust-building moment. Poor DSR handling — slow responses, confusing processes, partial fulfillment that the user has to chase — is a trust-destroying one. Under GDPR and UK GDPR, the response deadline is one month. Under CCPA, it's 45 days. Both are shorter than most teams realize until they're already late.

Rights also create a secondary obligation: if a user asks you to delete their data, you have to be able to actually do it. That sounds obvious, but it requires knowing where all your data lives — in your product database, your analytics tools, your data warehouse, your marketing platform, your support system, and any third parties you've shared it with. Products that never built a data map discover this the hard way.
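A data map can be operationalized as a registry that a deletion request iterates over. Here is a minimal sketch with hypothetical store names and stubbed delete calls; the loop itself is trivial, and keeping the registry complete is the real work:

```python
from datetime import datetime, timezone

# Hypothetical registry of every system holding user data. A data map's
# job is to make sure nothing is missing from this dictionary.
DATA_STORES = {
    "product_db": lambda uid: print(f"product_db: delete account {uid}"),
    "analytics": lambda uid: print(f"analytics: purge events for {uid}"),
    "email_platform": lambda uid: print(f"email_platform: erase contact {uid}"),
}

def fulfill_deletion(user_id: str) -> dict:
    """Run deletion against every registered store and keep an audit record."""
    results = {}
    for name, delete_fn in DATA_STORES.items():
        try:
            delete_fn(user_id)
            results[name] = "deleted"
        except Exception as exc:  # a failed store must be retried, never silently skipped
            results[name] = f"failed: {exc}"
    return {
        "user_id": user_id,
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "results": results,
    }

print(fulfill_deletion("u_123"))
```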

By interaction level:
- Low complexity (small user base, centralized data): A documented manual process with clear ownership is sufficient. Track all requests and responses.
- Medium complexity (growing user base, data spread across a few systems): Build a user-facing request mechanism (a form, a settings page, a support flow). Integrate your main data systems to fulfill deletions efficiently. Define and meet response SLAs.
- High complexity (large user base, data in many systems including third parties): Invest in automation. Map every system that holds user data. Establish downstream deletion flows to vendors and sub-processors. Report on fulfillment rates and times as operational metrics.

ICO — Data Protection by Design and by Default

The ICO's core guidance on building privacy rights into systems by design, including how to architect products so that fulfilling access and deletion requests is operationally feasible rather than a manual nightmare.

IAPP — Privacy Engineering: The What, Why and How

A clear-eyed overview of what it means to translate privacy rights requirements into technical realities, and why this is increasingly a product engineering discipline rather than a legal one.

4. Third-Party and Vendor Risk: Your Data Supply Chain Is Your Problem

One of the biggest gaps between how PMs think about privacy and how regulators think about privacy is the question of who's responsible when a vendor misuses data. The answer under every framework discussed here is the same: you are, whether or not the vendor caused the problem.

When you integrate a third-party analytics tool, an advertising SDK, a customer support platform, or a data enrichment service, you are sharing your users' data with that vendor. You're also implicitly vouching for how they handle it. Under GDPR, if your processor mishandles data, you — as the controller — bear responsibility. The enforcement actions against Sephora, DoorDash, and dozens of others have established that "we didn't know the vendor was doing that" is not a defense.

This means two things for PMs. First, every new vendor integration that involves user data needs a privacy review before it ships — not as a formality, but as a genuine assessment of what data flows, where it goes, and whether the vendor's handling meets your users' expectations and your regulatory obligations. Second, your privacy notices and consent flows have to actually reflect the third-party data sharing that occurs in your product. If you have a pixel that sends behavioral data to an ad network, and your privacy policy doesn't clearly disclose that, you have a problem regardless of whether the pixel is technically invisible to users.
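A vendor privacy review is easier to sustain when the inventory lives in a structured form rather than a document. Here is a minimal sketch of what one record might capture, with hypothetical vendor names and fields:

```python
from dataclasses import dataclass

@dataclass
class VendorRecord:
    """One row in a data-sharing inventory: enough to answer
    'what flows where, and under what terms?'"""
    name: str
    category: str            # "infrastructure" | "analytics" | "marketing"
    data_shared: list[str]   # e.g. ["email", "behavioral events"]
    role: str                # "processor" (your purposes) or "controller" (their own)
    dpa_signed: bool
    consent_required: bool   # e.g. EU/UK analytics consent, CCPA "data sale" opt-out

vendors = [
    VendorRecord("ExampleAnalytics", "analytics", ["behavioral events"], "processor", True, True),
    VendorRecord("ExampleAdNet", "marketing", ["device id", "page views"], "controller", False, True),
]

# Surface the riskiest gaps first: own-purposes vendors and missing DPAs
for v in vendors:
    if v.role == "controller" or not v.dpa_signed:
        print(f"Review: {v.name} ({v.category}) role={v.role}, DPA signed={v.dpa_signed}")
```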

By vendor type:
- Infrastructure vendors (cloud hosting, logging, monitoring): Lower risk if they're acting purely as processors with no independent data access. Ensure your contracts include appropriate data processing terms.
- Product and analytics vendors (analytics platforms, A/B testing tools, session recording): Moderate risk. Understand what data flows to them and whether consent is required. Review their data retention and deletion practices.
- Marketing and advertising vendors (ad networks, email platforms, CRM systems): Highest risk. These vendors often use data for their own purposes beyond your product. Classify these relationships carefully — under CCPA, many constitute "data sales" requiring opt-out mechanisms.

IAPP — Third-Party Vendor Management Means Managing Your Own Risk

A practical series on building a vendor management program that actually works, including how to tier vendors by risk level, what due diligence looks like at each tier, and how to maintain oversight at scale.

5. Privacy by Design: Embed It, Don't Bolt It On

Every pillar in this guide is easier — and cheaper — when privacy is considered at the start of development rather than added as a layer after the product is built. The Privacy by Design principle, codified in law by GDPR Article 25 and reflected in every major framework your product is subject to, has a simple premise: design your product so that the privacy-respecting behavior is the default, not the exception.

In practice, this means asking privacy questions at the right moments in your development process: during kick-off (what data does this feature actually need?), during design (how do we communicate this to users honestly?), during development (are we storing this data securely, with the right retention policies?), and at launch (have we tested what happens when a user exercises their rights?). These questions don't require a legal expert in the room at every sprint — they require PMs who know to ask them and engineers who know how to answer them.

The financial case is straightforward. IBM's 2024 Cost of a Data Breach report found that organizations with mature privacy programs saved an average of $1.5 million per breach compared to less-developed programs. The cost of retrofitting privacy into a product that wasn't designed with it in mind — technically, operationally, and reputationally — consistently exceeds the cost of doing it right the first time.

By development stage:
- Kick-off and research: Define your data model before you define your feature. Map what personal data your product touches and why. Challenge every "we might need this later" data collection decision.
- Design and development: Default settings should favor privacy. Encrypt data in transit and at rest. Define retention policies before data accumulates, not after. Avoid collecting real user data in development and test environments.
- Launch and post-launch: Conduct a privacy review before major releases, not after. Monitor for privacy incidents and have a response plan. Treat new features as new data processing activities that need review.

ICO — Designing Products That Protect Privacy

The ICO's full hub for product-focused privacy guidance, including their detailed lifecycle guidance and supplementary resources on specific design challenges. Written for technology professionals, not lawyers.

NIST Privacy Framework

The US counterpart to NIST's Cybersecurity Framework. A voluntary, practical tool for identifying and managing privacy risk across an organization's operations and products. Particularly useful for teams that need a shared language across legal, engineering, and product.

Putting It Together

Privacy done well isn't invisible to users — it's visible in the right ways. It's a consent flow that makes sense on first read. It's a data deletion request that completes in days, not weeks. It's a settings page where the privacy-respecting default doesn't require three clicks to find. It's a product that works the same way whether or not the user opted out of tracking.

Each of those moments is a design decision. And each of them either builds or spends the trust that keeps users coming back. The PMs who understand this treat privacy as a product discipline — the same craft they bring to performance, reliability, and user experience. The ones who don't tend to find out why it matters through an enforcement action, a data breach, or a wave of users canceling because they read something in the news.

Trust is the asset. Privacy is how you protect it.

AI Risk and Responsible AI

You don't need a philosophy degree to build responsible AI products. You need a practical framework — one that helps you ship faster, protect your users, and avoid the kind of headlines that take years to recover from.

That's what Responsible AI (RAI) actually is: a set of operational disciplines that reduce risk, build customer trust, and protect your brand's ability to keep innovating. Think of it less as a constraint on your roadmap and more as the engineering spec for long-term product health.

This guide covers the five pillars every PM should understand, with concrete guidance scaled to your product's risk level.

Start Here: Know Your Risk Level

Before applying any of the pillars below, calibrate your context. A recommendation widget on a retail site and an AI-assisted medical triage tool are both "AI products" — but they need different levels of rigor. Use our assessment tools to help find your level of risk.

Low Risk — outputs are informational, reversible, or low-stakes (recommendations, content generation, search)
Medium Risk — outputs influence decisions with meaningful consequences (hiring screening, financial guidance, customer service routing)
High Risk — outputs directly affect safety, rights, or significant financial outcomes (credit decisions, healthcare, legal, identity verification)

When in doubt, round up.

1. Accountability: Own What Your AI Does

In traditional software, a bug is a mechanical failure with a traceable cause. In AI, errors can be emergent — unexpected outputs that no single engineer designed. That means someone needs to own the system's behavior holistically, and your organization needs to be able to explain how it was built, why it made a decision, and what was in place to prevent harm.

Without clear accountability, your product is a liability. With it, you have the foundation for continuous improvement.

By risk level:
- Low: Document your decision logic at a high level. Include AI behavior in your standard release process.
- Medium: Assign a named owner for AI-related issues. Run a weekly review of user-reported problems.
- High: Build Human-in-the-Loop (HITL) checkpoints into the product — the AI presents options, a qualified human authorizes the outcome (a sketch follows below). Establish a governance committee with quarterly performance reviews.
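Here is a minimal sketch of that HITL checkpoint, with hypothetical option and field names; the essential property is that the model proposes, a named human disposes, and an audit record survives:

```python
def hitl_authorize(case_id: str, options: list[dict], reviewer_choice: int, reviewer_id: str) -> dict:
    """Human-in-the-loop checkpoint: the model proposes ranked options, but
    nothing takes effect until a named reviewer authorizes one."""
    decision = options[reviewer_choice]
    # The audit trail -- who authorized what, at what model confidence -- is the point.
    return {
        "case": case_id,
        "authorized": decision["summary"],
        "model_confidence": decision["confidence"],
        "reviewer": reviewer_id,
    }

options = [
    {"summary": "approve application", "confidence": 0.81},
    {"summary": "refer for manual underwriting", "confidence": 0.63},
]
print(hitl_authorize("case-042", options, reviewer_choice=1, reviewer_id="jsmith"))
```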

NIST AI Risk Management Framework

The US government's voluntary framework for governing AI across the full product lifecycle. The Govern, Map, Measure, and Manage structure is the closest thing to an industry standard for accountability programs.

OECD AI Principles

The international baseline. Five core principles (including accountability and transparency) adopted by over 40 governments, updated in 2024 to address generative AI.

2. Transparency: Don't Hide the Machine

Users should always know when they're interacting with an AI. This sounds obvious, but it's routinely underbuilt — a small label in the footer is not the same as genuine transparency.

Transparency also means being able to explain why the system produced a given output, not just what it produced. In high-stakes contexts, a black-box decision — a loan denial, a content removal, a screening rejection — without any explanation feels arbitrary at best and discriminatory at worst. Explainability turns your AI from an oracle into a reliable tool.

By risk level:
- Low: Label AI-generated content clearly. Set honest expectations about what the system can and can't do.
- Medium: Surface confidence signals. If the AI is uncertain, design the UI to say so and prompt the user to verify.
- High: Build explainability into the product architecture, not as an afterthought. The system must be able to surface the primary factors behind any decision. All logic must be auditable.

Google PAIR Guidebook — Explainability + Trust

Practical, PM-friendly guidance from Google's People + AI Research team on how to design AI explanations that users actually understand. Covers general system explanations vs. specific output explanations with real product examples.

3. Fairness: Build for Everyone, Not Just Your Test Users

AI models are trained on historical data — which means they inherit historical biases. If you don't actively audit your product, you risk building something that works well for one group of users and quietly fails another.

This isn't just an ethics problem. It's a product quality problem, a legal risk, and a brand problem. Catching it early in development is far cheaper than addressing it after launch.
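The sub-group analysis described in the Medium bullet below doesn't require special tooling to start. Here is a minimal sketch with toy data, showing how an aggregate metric can mask a group-level failure:

```python
from collections import defaultdict

def subgroup_accuracy(records: list[dict]) -> dict:
    """Accuracy per demographic group; aggregate metrics hide exactly
    the per-group failure this is meant to catch."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["actual"])
    return {group: correct[group] / totals[group] for group in totals}

# Toy data: 75% accuracy in aggregate, but group B fails completely
records = [
    {"group": "A", "prediction": 1, "actual": 1},
    {"group": "A", "prediction": 0, "actual": 0},
    {"group": "A", "prediction": 1, "actual": 1},
    {"group": "B", "prediction": 1, "actual": 0},
]
print(subgroup_accuracy(records))  # {'A': 1.0, 'B': 0.0}
```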

By risk level:
- Low: Use representative training data and run standard QA. Watch for obvious stereotypes or output skews.
- Medium: Conduct sub-group analysis across key demographics — don't just measure aggregate performance. Give users a clear, easy path to report perceived bias.
- High: Require formal bias audits against diverse datasets before deployment. Consider fairness constraints that mathematically limit disparate impact across protected groups.

IBM AI Fairness 360

An open-source toolkit for detecting, measuring, and mitigating bias in machine learning models. Includes tutorials for common high-stakes use cases like credit scoring and healthcare.

Microsoft Fairlearn

A community-driven Python toolkit that helps teams assess and improve model fairness. Particularly useful for teams who want to run sub-group performance analysis without deep ML expertise.

4. Safety and Robustness: Plan for the Adversarial User

Your users won't all behave as intended. Some will be confused. Some will be curious. Some will deliberately try to break your product — feeding it unexpected inputs, attempting prompt injection attacks, or probing for outputs you'd never want associated with your brand.

Safety means building guardrails so the system doesn't produce toxic, harmful, or illegal content. Robustness means it behaves predictably under stress, including when those guardrails are deliberately tested. This is the baseline: before you can build for delight, you have to build for "do no harm."

By risk level:
- Low: Implement basic input validation. Ensure the system degrades gracefully on unexpected inputs rather than crashing or producing nonsense.
- Medium: Deploy content filters. Review user interaction logs regularly to identify emerging attack patterns and edge cases.
- High: Integrate adversarial testing ("red-teaming") into your sprint cycles — dedicate time to intentionally trying to break the model. Build a kill switch that lets you disable or constrain AI features instantly if something goes wrong (a sketch follows this list).
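Here is a minimal sketch of that kill switch, assuming a hypothetical JSON flag file the team can edit at runtime; the two properties that matter are failing closed and degrading gracefully:

```python
import json
import pathlib

FLAGS_FILE = pathlib.Path("feature_flags.json")  # hypothetical runtime-editable config

def ai_feature_enabled() -> bool:
    """Re-read the flag on every call so flipping the file takes effect
    immediately, with no redeploy."""
    try:
        return json.loads(FLAGS_FILE.read_text()).get("ai_assistant", False)
    except (FileNotFoundError, json.JSONDecodeError):
        return False  # fail closed: no valid config means no AI feature

def call_model(query: str) -> str:
    return f"(model output for {query!r})"  # stand-in for the real model call

def answer(query: str) -> str:
    if not ai_feature_enabled():
        return "This feature is temporarily unavailable."  # graceful degradation
    return call_model(query)

print(answer("summarize this ticket"))  # unavailable until the flag file grants it
```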

Microsoft AI Red Team — Planning Guide

A practical methodology for setting up red teaming across the LLM product lifecycle. Covers team composition, harm categories, and how red teaming integrates with (rather than replaces) systematic safety measurement.

5. Privacy and Human-Centricity: Keep the User in Control

Privacy isn't just a legal requirement — it's a signal about how much you respect your users. How you handle their data directly shapes how much they trust your product.

Human-centricity is the related principle that the AI should work for the user, not on them. That means designing to empower judgment, not replace it — and actively avoiding "dark patterns" where the AI nudges users toward outcomes that benefit your business at their expense. The distinction between a helpful assistant and a manipulative one often comes down to design choices PMs make at the feature level.

By risk level:
- Low: Collect only what you need. Be explicit in your terms about how user interactions may be used.
- Medium: Give users meaningful control — the ability to opt out of data collection or model training without losing access to core features.
- High: Default to privacy-protective architecture: data masking, anonymization, on-device processing where feasible. Ensure the user remains the agent of their own decisions at every step.

Privacy by Design — AI Design Patterns

Concrete UX patterns for building privacy-first AI products, with real examples from Apple, Signal, and DuckDuckGo. Useful for translating the principle into actual feature decisions.

Google PAIR Guidebook — User Needs & Mental Models

The human-centricity counterpart to explainability. Covers how to understand what users actually need from AI features and how to design for appropriate reliance rather than over-trust or avoidance.

Putting It Together

None of these pillars requires you to slow down your roadmap. Applied proportionally to your risk level, they're the difference between a product that scales with trust and one that accumulates quiet technical and reputational debt until something breaks publicly.

The PMs who get this right don't treat Responsible AI as a compliance checklist. They treat it as product craft — the same attention to detail they bring to performance, usability, and retention. Your users can't always articulate what makes them trust a product, but they can always tell when they don't.

Online Safety Risk

The conversation about online safety often starts in the wrong place. It starts with regulation — laws your legal team is worried about — rather than with users and what happens to them when your product works as designed, or when it doesn't.

That framing is a trap. It turns online safety into a compliance exercise, which means it gets resourced like one: minimally, reactively, and always too late. Products that treat safety as a legal obligation tend to build the minimum viable guardrails. Products that treat safety as a core quality dimension build platforms where people actually want to spend time.

This guide covers the five pillars of online safety that matter most to product managers — not because regulators are watching, but because your users are.

Start Here: Know What Kind of Platform You're Building

Online safety risk scales with the nature of your platform and the behaviors it enables. A read-only product with no user interaction carries almost no safety surface area. A platform where users generate content, communicate with each other, transact, or interact with vulnerable populations is a fundamentally different problem.

Before applying the pillars below, categorize your platform honestly:

Low Interaction — users consume content you produce or curate; minimal user-to-user contact (media sites, productivity tools, informational apps)
Medium Interaction — users generate content or interact in structured ways (reviews, comments, ratings, limited messaging)
High Interaction — users communicate freely, build communities, or interact in real time (social platforms, marketplaces, gaming, dating apps, forums)

The higher the interaction level, the more seriously you need to take every pillar below. And if your platform reaches minors — even incidentally — treat every pillar as high priority regardless of where else you'd place yourself.

1. Content and Conduct Policy: Define the Rules of the Road

Every platform that hosts user behavior needs a clear answer to the question: what's allowed here, and what isn't? That answer is your content and conduct policy — the set of rules that governs what users can say, share, and do.

Policy is a product. The boundaries you set directly shape the community you attract, the harms you prevent, and the trust your users extend to your platform. A policy that's too vague gets gamed. One that's overly restrictive stifles the engagement that makes your product valuable. Getting it right requires real thought about your users, your platform's purpose, and the specific harms your product is most likely to enable.

Good policies share a few characteristics: they're written in plain language people actually understand; they explain why a rule exists, not just that it does; they're specific enough to be enforced consistently; and they're updated as new harms emerge.

By interaction level:
- Low: Basic terms of service are sufficient. Focus on what users can do, not just what they can't.
- Medium: Develop a clear community guidelines document, separate from legal terms, that covers the most likely harm categories for your context (spam, fraud, harassment, misinformation). Make it easy to find.
- High: Treat policy as a living document with a dedicated owner. Run regular reviews to catch gaps as your product and user behavior evolve. Publish your policies openly and notify users of substantive changes.

TSPA Trust & Safety Curriculum — Creating and Enforcing Policy

The industry-standard free curriculum from the Trust & Safety Professional Association. The policy chapter covers how practitioners define, scope, and operationalize content policy at scale.

Digital Trust & Safety Partnership — Best Practices Framework

A practical framework from a coalition of major technology companies covering how to govern user conduct, enforce standards, and build trust with users through consistent policy application.

2. Harm Detection and Moderation: Don't Wait for Reports

The most expensive online safety mistake is building entirely reactive systems — waiting for users to flag problems before you act. By the time a report lands, the harm has already happened: someone was harassed, exposed to illegal content, or defrauded. Your moderation system should be designed, wherever possible, to catch the most serious harms before they reach their intended target.

This doesn't mean automating everything. Automated detection is fast and scalable, but it's blunt — it generates false positives that frustrate legitimate users, and misses nuanced harms that require context to understand. The standard in the industry is a layered approach: automation handles volume and catches the obvious, human review handles complexity and edge cases, and both feed back into each other over time.
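The layered approach reduces to a routing decision on classifier confidence. Here is a minimal sketch with illustrative thresholds; real values should be tuned against your own false-positive and false-negative rates:

```python
def moderate(content: str, classifier_score: float) -> str:
    """Route content by automated confidence: high-confidence violations are
    actioned immediately, the uncertain middle goes to human review.
    Thresholds are illustrative, not recommendations."""
    REMOVE_AT, REVIEW_AT = 0.95, 0.60
    if classifier_score >= REMOVE_AT:
        return "removed"           # automation handles volume and the obvious
    if classifier_score >= REVIEW_AT:
        return "queued_for_human"  # humans handle nuance and context
    return "published"

# Both error directions feed back into the thresholds over time: appeals that
# overturn removals (false positives) and reports on published content (false negatives).
print(moderate("some user post", classifier_score=0.72))  # queued_for_human
```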

Critically, moderation quality is a product metric, not just an operational one. False positives (removing content that shouldn't have been) erode trust just as much as false negatives (missing content that should have been removed). Tracking both — and building appeals processes that surface systematic errors — is how you improve over time.

By interaction level:
- Low: Basic spam and fraud filtering. Focus on account-level signals rather than content.
- Medium: Deploy automated filters for high-severity harm categories (illegal content, spam, clear policy violations). Establish a review queue for escalations and user reports with defined response SLAs.
- High: Invest in layered detection combining automated classifiers, behavioral signals, and human review. Build measurement systems that track both false positive and false negative rates. Plan for content volume to grow faster than your team.

TSPA — What Is Content Moderation?

A clear overview of how moderation systems work, how the product, policy, and operations functions interact, and why the collaboration between them matters for quality.

Thorn/Safer — Product Manager's Guide to Content Moderation Solutions

Practical guidance specifically for PMs on evaluating moderation tooling, prioritizing harm types, and building a business case for investment. Grounded in real platform challenges.

3. User Controls and Reporting: Give People the Tools to Protect Themselves

Even the best moderation infrastructure won't catch everything. The other half of your safety system is the set of tools you give users to protect themselves — and the reporting mechanisms you build so users can flag what your automated systems miss.

This is where online safety becomes a UX problem. A reporting button buried three menus deep is not a reporting button — it's a liability hedge. Effective user controls are prominent, simple, and fast: easy to find in the moment someone needs them, easy to use without creating friction for safe interactions, and designed for the specific context where harm is most likely to occur.

After a user takes action — reporting content, blocking someone, submitting an appeal — they need to know what happened. Closing the loop with users is one of the most consistently underbuilt parts of online safety systems, and one of the most impactful for trust. When users feel heard, they stay. When their reports disappear into a void, they leave.

By interaction level:
- Low: Provide a clear, easy-to-find path to report problems. Basic block/mute controls if any user-to-user interaction exists.
- Medium: Build in-product reporting flows for the most likely harm categories. Send status updates when reports are resolved. Offer basic privacy controls (who can see my content, who can contact me).
- High: Design safety controls as first-class features — not afterthoughts. Include granular blocking, muting, and visibility controls. Build an appeals process for enforcement actions. Measure how often users actually engage with these tools and whether they feel they worked.

eSafety Commissioner — Empowering Users to Stay Safe Online

Australia's eSafety Commissioner publishes detailed, practical guidance on building effective user reporting tools, appeals flows, and safety information. One of the most PM-applicable government resources on this topic.

4. Protecting Vulnerable Users: Design for the Most at Risk

Every platform has a general population of users — and within that population, groups who are more susceptible to harm. Children are the most obvious example, but they're not the only one. Elderly users, people experiencing mental health crises, survivors of domestic abuse, users with low digital literacy — all of these groups interact with your product in ways that require specific design consideration.

The key principle here is that you're not designing a different product for vulnerable users. You're designing a baseline product that doesn't exploit psychological vulnerabilities or rely on users being able to protect themselves from harm the platform creates. Dark patterns — design choices that manipulate users into actions against their own interests — are the clearest example of this failure. Infinite scroll, fear-of-missing-out notifications, deliberately confusing privacy settings, default-on data sharing — all of these shift cost onto your users in order to extract value for your business.

Safety-oriented design goes in the opposite direction: defaults that protect rather than exploit, interfaces that make safe choices easy and clear, and product decisions that optimize for user wellbeing alongside engagement.

By interaction level:
- Low: Audit your product for dark patterns. Remove default settings that share data or increase exposure without explicit user consent.
- Medium: Assess whether your product is likely to reach minors. If so, apply stricter defaults regardless of formal age verification. Avoid engagement mechanics (streaks, FOMO notifications, compulsive loops) that manipulate rather than genuinely serve users.
- High: Conduct explicit vulnerability assessments before major feature launches. Consider separate or constrained experiences for users who may be at higher risk. Build crisis pathways — for users encountering self-harm content, for example — into the product experience itself, not just into policy documents.

TSPA — Safety by Design: What It Is & Why It Matters

Covers the conceptual framework for thinking about harm types, vulnerable populations, and the difference between preventing all harm and proactively reducing foreseeable harm.

Brookings — Using Safety by Design to Address Online Harms

A policy-grounded but accessible overview of how Safety by Design thinking translates into product decisions, including a useful discussion of dark patterns and choice architecture.

5. Transparency and Accountability: Show Your Work

Users who can't understand how your platform enforces its rules — or whether it enforces them at all — can't meaningfully trust it. Transparency is how you make your safety commitments real rather than theoretical.

This has a few dimensions. The first is rule transparency: are your policies easy to find, written plainly, and updated when they change? The second is enforcement transparency: when you take action on a user's content or account, do you tell them why, and do you give them a path to appeal? The third is systemic transparency: do you publish aggregate data on how your platform is performing against its safety commitments, so users and the public can assess whether your policies are more than words?

The third dimension is the one most PMs underinvest in. Transparency reporting — publishing data on reported content, enforcement actions, appeal outcomes, and emerging harm trends — builds credibility in a way that policy documents alone cannot. Major platforms that publish regular transparency reports consistently score higher on user trust metrics than those that don't. This is increasingly expected by sophisticated users even before it's required by regulation.
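The underlying mechanics are simple if enforcement actions are logged as structured events. Here is a minimal sketch with hypothetical log fields, aggregating the counts a basic transparency report needs:

```python
from collections import Counter

# Hypothetical structured enforcement log accumulated over a reporting period
log = [
    {"action": "removed", "category": "spam", "appealed": False, "overturned": False},
    {"action": "removed", "category": "harassment", "appealed": True, "overturned": True},
    {"action": "warned", "category": "spam", "appealed": False, "overturned": False},
]

report = {
    "actions_by_type": dict(Counter(entry["action"] for entry in log)),
    "actions_by_category": dict(Counter(entry["category"] for entry in log)),
    "appeals_received": sum(entry["appealed"] for entry in log),
    "appeals_overturned": sum(entry["overturned"] for entry in log),  # systematic-error signal
}
print(report)
```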

By interaction level:
- Low: Make your policies easy to find and written in plain language. State clearly what happens when rules are broken.
- Medium: Notify users when enforcement actions affect them. Provide a clear appeals path. Document enforcement decisions internally so you can identify and correct patterns over time.
- High: Publish a regular transparency report covering key safety metrics, enforcement volumes, appeal outcomes, and how the platform is evolving its approach to safety. Treat this as a brand asset, not a regulatory filing.

TSPA — Transparency Reporting

The professional standard for what a meaningful transparency report covers, how to structure it, and how to think about what metrics to publish. Written by practitioners for practitioners.

eSafety Commissioner — Safety by Design Principles

The full set of Safety by Design principles, including the accountability and transparency components. A benchmark for assessing your own practices.

Putting It Together

Online safety isn't a single feature or a policy document. It's the cumulative effect of hundreds of design decisions, made across five interlocking disciplines, that determines whether your platform is a place users trust with their time and their wellbeing.

The PMs who do this well don't treat it as a separate workstream. They ask the safety question alongside every other product question: Who could be harmed by this feature? What signals would tell us if that's happening? What would a user do if something went wrong? That instinct, applied consistently, is what separates platforms that age well from platforms that don't.

Trust is slow to build and fast to lose. The investment you make in safety now is the foundation of the brand you'll have in five years.

Children's Risk Online

There's a version of this topic that's easy to dismiss. Your product isn't for kids. You don't target minors. Your terms of service say users must be 13 or older. Job done.

That version is wrong, and it's getting more expensive to be wrong about it.

The trigger for children's protection obligations across every major jurisdiction isn't whether your product is designed for children. It's whether children are likely to access it. A gaming platform, a social app, a marketplace, a news site, a communication tool — if children realistically use your product, you have obligations to them that are distinct from your obligations to adult users. The regulatory landscape across the US, UK, EU, and Canada has spent the last five years closing every gap that "we didn't target kids" used to cover.

More importantly: children are users. They deserve products designed with their wellbeing in mind, not products designed for adults where children happen to show up and the design works against them. This guide covers the five areas of product practice that matter most when children are in your user base.

Your Regulatory Context in Two Paragraphs

The four markets your product operates in approach this differently but are converging fast.

In the US, COPPA (Children's Online Privacy Protection Act) covers children under 13 and was significantly updated in 2025 — the revised rule now requires separate parental consent for data collection and for third-party sharing, and broadens the definition of personal information to include biometrics and persistent identifiers. The updated rule's compliance deadline is April 2026. California's Age-Appropriate Design Code (modeled closely on the UK Children's Code) has had a contested legal path but has influenced product practices broadly. The UK's ICO Age Appropriate Design Code — also called the Children's Code — has been enforceable since 2021 and sets 15 standards for any online service likely to be accessed by anyone under 18. It is the most detailed and operationally demanding framework in this space. The EU's GDPR sets 16 as the age of digital consent (member states may lower it to 13), and the Digital Services Act adds obligations around recommender systems and advertising targeting for minors. Canada has no dedicated children's privacy law yet, but PIPEDA treats children's data as inherently sensitive, the Office of the Privacy Commissioner (OPC) launched a children's privacy code consultation in 2025, and enforcement guidance is tightening materially.

The practical design shortcut — as with general privacy — is to build to the UK Children's Code standard. It is the most specific and operationally detailed framework, and compliance with it satisfies the spirit and substance of what every other jurisdiction in this list is moving toward.

1. Know If You're In Scope: "Likely to Be Accessed" Is a Wide Net

The most important thing a PM can do on this topic is make an honest assessment of whether children use their product — not whether children are the intended user. The UK Children's Code and most successor frameworks use a "likely to be accessed by children" standard, not a "directed at children" standard. That distinction is load-bearing.

A social platform that technically requires users to be 16, but has no mechanism to verify age and is routinely used by 12-year-olds, is in scope. A general-audience gaming app that happens to be popular with teenagers is in scope. A communication tool used by families is in scope. The question isn't what your marketing materials say — it's what your actual user base looks like.

Conduct this assessment honestly, document it, and revisit it when your product changes or your user base shifts. If you don't know the age distribution of your users, that's itself a signal worth acting on. Many products adopt minimum age requirements as a cost-free liability hedge without ever thinking about whether children access the product in practice, or what that means for design.

By product type:
- Products with no realistic child access path (enterprise B2B tools, financial services for verified adults, etc.): Low obligation, but document the assessment.
- General-audience consumer products (apps, games, social features, content platforms, marketplaces): Presume children are in your user base unless you have strong evidence otherwise. Apply the pillars below proportionally.
- Products with child-directed features or content (games, education, family apps, anything with cartoon characters or age-targeted content): Highest obligation. Assume full scope of every framework discussed in this guide.
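Teams often ask what "document it" looks like in practice. One lightweight option is a structured record kept in version control alongside the product spec. Here is a minimal sketch in TypeScript, where every field name and value is an illustrative assumption rather than a regulatory template:

```typescript
// Hypothetical record of a "likely to be accessed by children" assessment.
// Field names and example values are illustrative, not a regulatory template.
type ChildAccessAssessment = {
  product: string;
  assessedOn: string;            // ISO date of this assessment
  reviewBy: string;              // ISO date to revisit (product or user-base changes)
  intendedAudience: "adults" | "general" | "child-directed";
  evidence: string[];            // what you actually looked at, not what you assume
  childrenLikelyToAccess: boolean;
  conclusion: string;
};

const assessment: ChildAccessAssessment = {
  product: "Example social app",
  assessedOn: "2026-01-15",
  reviewBy: "2026-07-15",
  intendedAudience: "general",
  evidence: [
    "App store reviews mentioning school use",
    "Self-declared ages at signup (12% under 16)",
    "Support tickets referencing parents",
  ],
  childrenLikelyToAccess: true,
  conclusion: "In scope of the UK Children's Code; apply minor-protective defaults.",
};
```

The value of writing it down is less the artifact itself than the forcing function: the evidence field makes "we didn't target kids" an empirical claim rather than an assumption.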

ICO — Introduction to the Children's Code

The ICO's definitive overview of who is covered, what the 15 standards require, and how to assess whether your service is in scope. The starting point for any PM working through this for the first time.

2. Privacy and Data Practices: Children's Data Is Different

When children are in your user base, the baseline privacy principles from any general framework get elevated. Data you might collect routinely from adults — behavioral profiles, location history, engagement patterns, social graphs — carries a fundamentally different risk profile when the subject is a child. Children can't fully understand or consent to the downstream uses of their data, they're in formative stages of development, and mistakes they make online can follow them in ways that adult mistakes don't.

The practical implications for product are threefold. First, minimize more aggressively. If you wouldn't want to explain to a parent exactly what you collect from their 10-year-old and why, you probably shouldn't be collecting it. Second, children's data should not be shared with third parties for advertising or profiling purposes — the updated US COPPA rule now requires separate consent for any third-party sharing, full stop, not just a single consent checkbox at signup. Third, retention needs to be strictly limited. The OPC in Canada, the ICO in the UK, and the FTC in the US have all emphasized that children's data should be deleted when it is no longer needed for the purpose it was collected — which in practice means shorter retention windows and active deletion processes, not "keep everything indefinitely" (a retention job is sketched after the list below).

Sensitive categories deserve special attention. Location data, biometric data, health and mental health indicators, and communications content are particularly high-risk when collected from minors. Each of these requires explicit justification for collection and heightened protections if collected at all.

By data type:
- Functional account data (username, email for account recovery): Collect, with parental consent where required. Minimize granularity — ask for a year of birth, not a full birthdate, if that's all you need.
- Behavioral and analytics data (usage patterns, session data, engagement metrics): Apply the minimum necessary principle strictly. Aggregate and anonymize quickly. Do not build individual behavioral profiles of children.
- Location, biometrics, communications content: Default off. Collect only where the feature genuinely requires it, with explicit consent and strong justification.
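To make "active deletion processes" concrete, here is a minimal sketch of a scheduled retention job. The table names, retention windows, and the deleteRowsOlderThan helper are all assumptions for illustration; real windows should come from your documented collection purposes.

```typescript
// Sketch of an active-deletion retention job for children's data.
// Table names, windows, and the storage interface are illustrative assumptions.

type RetentionRule = { table: string; maxAgeDays: number };

// Shorter windows for minors than you might use for adult data.
const minorRetentionRules: RetentionRule[] = [
  { table: "session_events", maxAgeDays: 30 }, // aggregate before this expires
  { table: "search_history", maxAgeDays: 14 },
  { table: "location_pings", maxAgeDays: 1 },  // only if the feature needs it at all
];

// Hypothetical storage interface — swap in your real data layer.
interface Store {
  deleteRowsOlderThan(
    table: string,
    cutoff: Date,
    filter: { isMinor: true },
  ): Promise<number>;
}

async function runMinorRetentionJob(store: Store, now = new Date()): Promise<void> {
  for (const rule of minorRetentionRules) {
    const cutoff = new Date(now.getTime() - rule.maxAgeDays * 24 * 60 * 60 * 1000);
    const deleted = await store.deleteRowsOlderThan(rule.table, cutoff, { isMinor: true });
    console.log(`${rule.table}: deleted ${deleted} rows older than ${rule.maxAgeDays} days`);
  }
}
```

The design point is that deletion runs on a schedule without anyone asking for it; a process that only fires on user request is a rights mechanism, not a retention limit.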

FTC — Children's Online Privacy Protection Rule (COPPA)

The authoritative US source on COPPA obligations, including the 2025 rule updates. Essential reading if your product has US users under 13.

OPC Canada — Collecting from Kids: Ten Tips

Practical, plain-language guidance from Canada's Privacy Commissioner on what good data practices for children look like, grounded in PIPEDA obligations.

3. Age-Appropriate Design: The Best Interests of the Child as a Design Principle

The UK Children's Code introduced a concept that is now spreading globally: the "best interests of the child" as a primary design constraint. This means that when there's a conflict between what serves your product's commercial interests and what serves the child's wellbeing, the child's wellbeing wins. That's not a soft aspiration — it's the statutory standard against which the ICO assesses compliance, and it's the direction every other jurisdiction in your operating footprint is moving.

In product terms, this translates to a set of concrete design obligations. Default settings must be the most privacy-protective available — if a child doesn't actively change anything, they should be in the safest configuration. Geolocation should be off by default. Profiles should be private by default. Data sharing with third parties should be off by default. Nudge techniques — design choices that pressure children into sharing more data, extending their sessions, or making choices against their interests — are explicitly prohibited under the UK code and emerging US state frameworks.

The transparency requirement has a specific twist for children: privacy information must be communicated in age-appropriate language, at the right moment, in a way the child can genuinely understand. A standard adult privacy policy in dense legal prose doesn't satisfy this. The UK code expects layered, contextual, visually accessible privacy notices that match the developmental level of the intended user.

By design context:
- Default settings: Every privacy and safety setting should default to its most protective state for users you know or suspect are under 18. Never default toward data sharing, visibility, or engagement amplification (a configuration sketch follows this list).
- Consent and transparency: Age-appropriate communication is a real design requirement. If your privacy UI would confuse a 14-year-old, it needs to be redesigned — not reworded in a privacy policy nobody reads.
- Nudge techniques: Audit your product for any UI that pressures children to share more data, extend sessions, or make choices that benefit your business at their expense. Remove them. This includes guilt-framed opt-outs, hidden settings, and asymmetric accept/decline button designs.
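A minimal sketch of what "most protective by default" can mean in code, assuming a hypothetical per-account settings object. The specific fields are illustrative; the point is the direction every default takes when the user may be a minor.

```typescript
// Hypothetical per-account settings. For users known or suspected to be
// under 18, every default points toward the most protective state.
type AccountSettings = {
  profileVisibility: "private" | "friends" | "public";
  geolocation: boolean;
  thirdPartyDataSharing: boolean;
  personalizedAds: boolean;
  directMessagesFrom: "no-one" | "friends" | "anyone";
  autoplay: boolean;
};

function defaultSettings(isLikelyMinor: boolean): AccountSettings {
  if (isLikelyMinor) {
    return {
      profileVisibility: "private",
      geolocation: false,
      thirdPartyDataSharing: false,
      personalizedAds: false,
      directMessagesFrom: "friends",
      autoplay: false,
    };
  }
  // Adult defaults can differ, but never silently downgrade a minor to them.
  return {
    profileVisibility: "friends",
    geolocation: false,
    thirdPartyDataSharing: false,
    personalizedAds: true,
    directMessagesFrom: "anyone",
    autoplay: true,
  };
}
```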

ICO — Age Appropriate Design: A Code of Practice

The full UK Children's Code. The 15 standards are the clearest articulation anywhere of what age-appropriate design means in operational product terms. Required reading for any PM whose product reaches children in the UK.

4. Harmful and Addictive Design Features: This Is Now a Product Liability Question

The conversation around children's digital wellbeing shifted significantly in 2024 and 2025. It moved from a question of regulatory compliance to a question of product liability. Juries in the US awarded damages against Meta and Google in cases where plaintiffs argued that design features — algorithmic recommendation, infinite scroll, autoplay, like counts, intermittent reward mechanics — were designed to maximize engagement in ways that foreseeably harmed adolescent mental health. The legal framing is product liability: the platforms knew their design created foreseeable harm to a vulnerable population, and shipped it anyway.

For most PMs, this is not yet a live legal risk. But it is a signal about the direction of travel — and the design principles it points toward are good product practice regardless of litigation outcomes. Features that exploit the developmental vulnerabilities of adolescents (heightened sensitivity to social feedback, underdeveloped impulse control, susceptibility to reward loops) to drive engagement aren't just ethically questionable. They're increasingly legally precarious, and they reliably generate the kind of press coverage that destroys brand trust with parents.

The emerging duty of care standard — proposed in the US Kids Online Safety Act, and codified in the UK Online Safety Act and in state-level laws in California, New York, and Maryland — requires platforms to exercise "reasonable care" in designing features that affect minors. That standard will be interpreted through enforcement and litigation over the next several years. Getting ahead of it now means auditing your engagement mechanics for their effect on child users, not just their effect on aggregate metrics.

Features to audit (a gating sketch follows this list):
- Algorithmic recommendation systems: What does the recommendation engine do when a minor is in a session? Does it amplify content that's harmful, distressing, or developmentally inappropriate? Consider limiting recommendation intensity and capping content categories for known or likely minor users.
- Infinite scroll and autoplay: These features remove natural stopping points and are specifically targeted by emerging state legislation. Consider session limits, natural break points, and opt-in rather than opt-out autoplay for users under 18.
- Social validation mechanics (likes, follower counts, public metrics): These features have the strongest evidence of association with mental health harm in adolescent girls ages 11–15. Consider whether they need to be present at all for minor users, or whether they can be modified (hidden counts, private-only engagement).
- Notifications and re-engagement: FOMO-based notifications, streak mechanics, and loss-aversion nudges are engagement tools that disproportionately affect developing minds. Apply stricter defaults and make them easy to turn off for users under 18.
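Here is a sketch of how this audit can become enforceable in code, gating engagement mechanics by age cohort. The flag names and cohort labels are assumptions; the pattern is the point: one place in the codebase decides which mechanics a minor's session gets.

```typescript
// Sketch of gating engagement mechanics by age cohort.
// Flag names and cohort labels are illustrative assumptions.
type AgeCohort = "under13" | "13to17" | "adult" | "unknown";

type EngagementFlags = {
  infiniteScroll: boolean;
  autoplayDefault: boolean;
  publicLikeCounts: boolean;
  streakNotifications: boolean;
  recommendationIntensity: "full" | "limited";
};

function engagementFlags(cohort: AgeCohort): EngagementFlags {
  // Treat "unknown" like a minor: if you can't verify age, protections apply.
  const minor = cohort !== "adult";
  return {
    infiniteScroll: !minor,
    autoplayDefault: !minor,           // opt-in, not opt-out, for under-18s
    publicLikeCounts: !minor,          // hide counts / private-only engagement
    streakNotifications: !minor,       // loss-aversion nudges off by default
    recommendationIntensity: minor ? "limited" : "full",
  };
}
```

Centralizing the decision matters: if each feature team gates its own mechanic, the audit has to be repeated everywhere, and one missed flag quietly re-exposes minors.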

TSPA — Safety by Design: What It Is & Why It Matters

The Trust & Safety Professional Association's framework for thinking about harmful features in product design, including how to model harm types and apply proportionate design responses.

Freshfields — A New Era for Child Online Privacy

A clear legal summary of the 2025 COPPA updates and Maryland Kids Code requirements, including what "duty of care" means in product practice. Useful for understanding the legal direction of travel without needing to read primary source legislation.

5. Age Assurance, Parental Controls, and User Rights: Giving Families the Tools They Need

Knowing who is on your platform is increasingly a product requirement, not just a compliance aspiration. The UK Online Safety Act mandates age assurance for services with content inappropriate for children. The updated COPPA rule explicitly encourages age verification as a good-faith compliance indicator. Several US states have passed or are pursuing social media age restrictions requiring platform-level verification. The OECD's 2025 study of 50 online services found that only 2 of them systematically assure age at account creation — a number that will not survive the current legislative cycle.

Age assurance is genuinely complex. The approaches range from low-friction methods (self-declaration, which is widely bypassed) to high-friction methods (government ID verification, which raises significant privacy concerns of its own). The proportionate approach, endorsed by the ICO, the OPC, and the FTC, is risk-based: apply the level of assurance that matches the risk your product poses to children. A platform with explicit content for adults needs robust verification. A general-interest consumer app for mixed audiences needs a credible signal, not necessarily a full identity document. The key principle: if you can't verify age, apply children's protections to all users in scope, or accept that you're building without the information you need.

Parental controls and children's user rights are the operational counterpart to age assurance. Where children are known users, parents should have meaningful tools — the ability to set limits, review activity, and manage account settings. Children themselves, as they approach adolescence, should have progressively more ability to manage their own privacy settings. And the updated COPPA rule now requires that children can easily delete their data and accounts — a user rights obligation that needs to be built into the product, not handled manually by a support team.
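As a sketch of what "built into the product" means for deletion, here is a hypothetical handler that cascades account and data deletion across every category. The Deleter interface and the category list are assumptions for illustration.

```typescript
// Sketch of a self-serve deletion flow for a child's account and data.
// The Deleter interface and category list are illustrative assumptions;
// the point is that deletion is a product capability, not a support ticket.
interface Deleter {
  deleteUserData(userId: string, category: string): Promise<void>;
  deleteAccount(userId: string): Promise<void>;
  logDeletion(userId: string, requestedBy: "child" | "parent"): Promise<void>;
}

const DATA_CATEGORIES = ["profile", "posts", "messages", "analytics", "backups"];

async function handleDeletionRequest(
  store: Deleter,
  userId: string,
  requestedBy: "child" | "parent",
): Promise<void> {
  // Delete every category, not just the visible profile.
  for (const category of DATA_CATEGORIES) {
    await store.deleteUserData(userId, category);
  }
  await store.deleteAccount(userId);
  // Keep a minimal deletion record for accountability, not the data itself.
  await store.logDeletion(userId, requestedBy);
}
```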

By risk level (a decision sketch follows the list):
- Low risk (limited child access, no age-sensitive content): Basic self-declaration at signup. Apply minimum children's protections to all users as a default.
- Medium risk (general consumer product, likely child users): Implement credible age assurance at account creation. Build parental control features. Ensure children can delete accounts and data easily.
- High risk (platform with significant minor user base, addictive mechanics, social features, or age-sensitive content): Invest in robust age assurance appropriate to your risk profile. Implement full parental controls. Build a children's settings mode with the most protective defaults. Treat this as a core product investment, not a compliance expense.
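A minimal sketch of the risk-based decision, mirroring the tiers above. The method names are illustrative assumptions; the real options depend on your market and your assurance vendor.

```typescript
// Sketch of a risk-based age assurance decision, mirroring the tiers above.
// Method names are illustrative; real options depend on vendor and market.
type ProductRisk = "low" | "medium" | "high";
type AssuranceMethod = "self-declaration" | "age-estimation" | "verified-id";

function requiredAssurance(risk: ProductRisk): AssuranceMethod {
  switch (risk) {
    case "low":
      return "self-declaration"; // plus minor-protective defaults for everyone
    case "medium":
      return "age-estimation";   // a credible signal, e.g. facial age estimation
    case "high":
      return "verified-id";      // robust verification for age-sensitive content
  }
}
```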

ICO — Children's Code: Guidance and Resources

The ICO's full resource hub including their updated opinion on age assurance, case studies from platforms that have implemented the code, and practical guidance on specific challenge areas like edtech and news services.

OECD — Age Assurance Practices of 50 Online Services

The most comprehensive cross-platform review of how age assurance is actually being implemented (or not) in practice. Useful both as context for why regulators are pushing harder, and as a benchmark against which to assess your own practices.

Putting It Together

Children's online protection is the fastest-moving area of digital product regulation across every market your product operates in. The legal frameworks are tightening. The litigation risk is real and growing. And the reputational consequences of getting it wrong — a regulatory enforcement action, a data breach involving children's data, a jury finding that your design harmed a child — are in a different category from most other product risks.

But the underlying principle doesn't require a lawyer to state. Children are less able than adults to protect themselves from design that exploits their vulnerabilities. They can't fully understand data collection or its consequences. They're disproportionately affected by features built to maximize engagement at the expense of wellbeing. Designing products that work in their interests, rather than against them, is both the ethical baseline and increasingly the legal minimum.

The PMs who do this well treat it as a design challenge: how do I build something that a parent would be genuinely comfortable with their child using? That question — applied honestly at every sprint — is a more practical guide than any regulatory checklist.