
AI compliance laws: Global regulations every AI product builder should know
By Riya Thambiraj

Key Takeaways
AI regulation is not just an EU story - at least 15 countries have passed or proposed AI-specific laws as of 2026
The EU AI Act is the most prescriptive, but US state-level AI laws (Colorado, Illinois, NYC) create compliance obligations that catch many companies off guard
China has three separate AI regulations already in force covering algorithms, deepfakes, and generative AI
Most AI laws share common themes - transparency, human oversight, non-discrimination, and accountability - even when enforcement differs
Building AI products for global markets means tracking overlapping requirements across multiple jurisdictions simultaneously
In March 2023, Italy's data protection authority temporarily banned ChatGPT. The tool was back within a month after OpenAI made changes, but the message was clear: AI products that ignore local regulations get pulled from markets.
That wasn't an isolated event. By the end of 2025, more than 15 countries had passed or proposed AI-specific legislation. The EU AI Act gets the most attention, but AI regulation is happening on every continent. If you're building an AI product for any market beyond your home country, you're dealing with a patchwork of rules that change every quarter.
The OECD AI Policy Observatory tracks over 1,000 AI policy initiatives across 69+ countries, with 47 jurisdictions having formally endorsed the OECD AI Principles. The Stanford HAI 2025 AI Index found AI mentions in legislative proceedings across 75 major countries increased 21.3% in 2024 alone - a ninefold increase since 2016. In the US, 59 AI-related federal regulations were introduced in 2024, more than double the 25 from 2023.
This guide covers the major AI regulations across 10+ jurisdictions. Not legal theory. Practical requirements that affect how you build, deploy, and maintain AI products.
Who this applies to
This guide matters for you if:
You sell AI products to customers in multiple countries - each market brings its own compliance requirements
Your AI product makes or influences decisions about people - hiring, lending, insurance, healthcare, education
You use AI-generated content - deepfakes, synthetic media, and AI-generated text face specific rules in several countries
You process biometric data with AI - face recognition, voice identification, and behavioral biometrics are heavily regulated almost everywhere
You're planning international expansion - knowing the regulatory map before you launch saves months of rework
Even if you're only selling in one country today, your product architecture should account for the regulations you'll face tomorrow. Retrofitting compliance into an AI system costs 3-5x more than building it in from the start.
"We talk to a lot of founders who think AI compliance is a 'later' problem. It's not. If you're building a hiring AI or a credit-scoring tool today, the EU AI Act high-risk requirements will affect you - and the architecture decisions that satisfy those requirements need to be in place before your first enterprise customer, not after."
Ashit Vora, Captain at RaftLabs
The EU AI Act: The global benchmark
The EU AI Act is the first comprehensive law to regulate AI systems horizontally, across sectors. It sets the standard that other countries reference, even when they take a different approach.
We've written a detailed EU AI Act compliance guide, so here's the summary that matters for global context.
Risk classification drives everything. The Act sorts AI systems into four tiers:
| Risk Level | What It Means | Examples | Requirements |
|---|---|---|---|
| Unacceptable | Banned outright | Social scoring, real-time biometric surveillance, manipulation of vulnerable groups | Prohibited since Feb 2025 |
| High Risk | Strict controls required | Hiring AI, credit scoring, medical devices, education assessment, law enforcement | Risk management, data governance, human oversight, conformity assessment |
| Limited Risk | Transparency required | Chatbots, AI-generated content, emotion recognition | Must disclose AI use to users |
| Minimal Risk | No specific requirements | Spam filters, AI-powered search, recommendation engines | Voluntary codes of conduct |
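To make the tiering concrete, here's a minimal sketch of how a product team might encode the four tiers in code. The use-case names and mapping below are illustrative assumptions, not legal classifications - real classification means reading your product against Annex III of the Act with counsel.

```python
from enum import Enum

class EUAIActRisk(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict controls + conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Illustrative mapping of product use cases to risk tiers (assumed names,
# not an official taxonomy). Real classification requires legal review.
USE_CASE_RISK = {
    "social_scoring": EUAIActRisk.UNACCEPTABLE,
    "resume_screening": EUAIActRisk.HIGH,
    "credit_scoring": EUAIActRisk.HIGH,
    "customer_chatbot": EUAIActRisk.LIMITED,
    "spam_filter": EUAIActRisk.MINIMAL,
}

def classify(use_case: str) -> EUAIActRisk:
    """Default to HIGH for unknown use cases, forcing explicit review."""
    return USE_CASE_RISK.get(use_case, EUAIActRisk.HIGH)

print(classify("resume_screening"))  # EUAIActRisk.HIGH
```

Defaulting unknown use cases to high risk is a deliberate design choice: it makes "we never classified this feature" impossible.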
Key dates: Prohibited practices and AI literacy requirements have applied since February 2025. Obligations for general-purpose AI models began in August 2025. Most high-risk obligations take effect in August 2026, and regulated product sectors (medical devices, automotive) get until August 2027.
Penalties: Up to 35 million euros or 7% of global annual revenue for the worst violations.
Extraterritorial scope: Like GDPR, the EU AI Act applies to any AI product available to EU users, regardless of where the company is based.
United States: No single law, many moving parts
The US doesn't have a federal AI law. What it has is a layered system of executive orders, federal agency guidance, and state laws - all moving at different speeds.
Federal level
NIST AI Risk Management Framework (AI RMF): Published in January 2023, this is a voluntary framework organized around four functions:
Govern - establish AI risk management policies and accountability structures
Map - identify and document AI risks in context
Measure - assess and track identified AI risks
Manage - prioritize and act on AI risks
NIST AI RMF isn't law, but it matters. Federal agencies reference it in procurement requirements. Some state laws offer it as a compliance safe harbor. If you're building AI for government customers, expect NIST AI RMF alignment as a contract requirement.
Executive Order 14110 (October 2023): Required federal agencies to complete AI safety assessments, established AI safety testing standards for powerful models, and directed agencies to address AI risks in their sectors. The order was revoked in January 2025, though some of the agency guidance it set in motion remains in use.
FTC enforcement: The Federal Trade Commission uses its existing consumer protection authority to act against AI harms. They've taken action against companies for AI-related deception, unfair practices, and discriminatory outcomes - without needing new AI-specific legislation.
State-level AI laws
This is where US AI regulation gets complex. Several states have passed AI-specific laws, and more are in progress.
Colorado SB 205 (AI Consumer Protection): Taking effect June 30, 2026 (pushed back from the original February 2026 start date), this is the first US state law specifically regulating "high-risk AI systems." It requires:
Deployers to conduct impact assessments for high-risk AI decisions
Notice to consumers when AI makes or substantially contributes to consequential decisions
A right for consumers to appeal AI-driven decisions
Annual review of AI systems for discrimination
High-risk decisions include employment, education, financial services, healthcare, housing, insurance, and legal services.
Illinois Biometric Information Privacy Act (BIPA): The toughest biometric privacy law in the US. If your AI product processes biometric data (face recognition, fingerprints, voice patterns, iris scans) from Illinois residents, BIPA requires:
Written informed consent before collection
A published data retention and destruction policy
No selling or profiting from biometric data
BIPA has a private right of action, meaning individuals can sue directly. Settlements have reached hundreds of millions of dollars. Facebook (now Meta) settled a BIPA class action for $650 million in 2021.
NYC Local Law 144 (Automated Employment Decision Tools): If you sell AI hiring tools used in New York City, this law requires:
An annual independent bias audit of the AI system
Public posting of audit results on the employer's website
Notice to candidates that AI is being used in the hiring process
Penalties of $500-$1,500 per violation per day
ℹ️ The US patchwork is growing fast
Illinois BIPA already shows how costly state-level AI liability can be. Beyond Meta's $650 million settlement noted above, TikTok settled a BIPA class action for $92 million. These aren't edge cases - they're the cost of deploying biometric AI without proper consent architecture.
China: Three regulations already in force
China moved faster than most countries expected. Three AI-specific regulations are already active.
Algorithmic Recommendation Regulations (March 2022): Apply to any service that uses algorithms to recommend content or products to users in China. Requirements include:
Users must be told they're receiving algorithmic recommendations
Users can opt out of personalized recommendations
Algorithms can't create "information cocoons" (filter bubbles) or discriminate on price
Algorithms must not promote content that endangers national security or disrupts social order
Algorithm registration with the Cyberspace Administration of China (CAC) is required
Deep Synthesis Regulations (January 2023): Cover deepfakes, AI-generated images, video, audio, and text. Key requirements:
AI-generated content must be clearly labeled
Providers must verify the identity of users who create synthetic content
Deep synthesis services must be registered with regulators
Generated content that could cause confusion must carry visible watermarks
Generative AI Regulations (August 2023): Apply to any generative AI service available to Chinese users. Requirements include:
Training data must be legally obtained
AI-generated content must not violate Chinese laws (broad category)
Services must register with the CAC before public release
Providers must conduct security assessments
Users must be able to report problematic content
What this means for your product: If your AI product is available to users in China or processes data from Chinese users, all three regulations may apply simultaneously. The registration and approval requirements are particularly significant - you can't launch first and comply later.
United Kingdom: Pro-innovation, not hands-off
The UK chose a different path after Brexit. Instead of prescriptive rules, it published a "pro-innovation" AI framework in March 2023 built on five principles:
- Safety, security, and robustness
- Transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
These principles aren't law on their own. Instead, existing regulators (FCA for finance, Ofcom for communications, CMA for competition, ICO for data) apply them within their sectors.
The AI Safety Institute (now rebranded as the AI Security Institute) conducts safety evaluations of frontier AI models and advises government on AI risks. It doesn't regulate companies directly, but its findings influence policy.
What's coming: The UK government has signaled it may introduce binding requirements for the highest-risk AI systems. The 2025 AI Opportunities Action Plan calls for a "statutory footing" for AI governance.
Practical impact: If you're operating in the UK, you won't face a single AI law like the EU AI Act. Instead, expect sector-specific regulators to apply AI principles to their existing authority. Financial AI gets FCA scrutiny. Healthcare AI gets MHRA and CQC attention. The obligations may be less codified, but they're real.
Canada: AIDA and the Consumer Privacy Protection Act
Canada's Artificial Intelligence and Data Act (AIDA) was introduced as Part 3 of Bill C-27. It's been through multiple revisions and committee reviews.
AIDA focuses on "high-impact AI systems" and would require:
Impact assessments before deploying high-impact AI
Risk mitigation measures
Transparency and explanation to affected individuals
Reporting serious harms to the AI and Data Commissioner
Penalties up to $10 million CAD or 3% of global revenue
Current status: Bill C-27, including AIDA, died on the order paper when Parliament was prorogued in January 2025, so the framework will need to be reintroduced. But the direction is clear - Canada is building toward mandatory AI requirements for high-impact systems.
Alongside AIDA: The Consumer Privacy Protection Act (CPPA), also part of Bill C-27, updates Canada's privacy framework with provisions relevant to AI - including automated decision-making transparency and the right to explanation.
The rest of the world: Key approaches
Brazil
Brazil's AI Bill (PL 2338/2023) passed the Senate in December 2024 and moves to the Chamber of Deputies. It follows a risk-based approach similar to the EU:
Rights-based framework with AI transparency requirements
High-risk AI systems need impact assessments
A new regulatory body (AI Supervisory Authority) to enforce the law
Emphasis on AI and human rights
Japan
Japan takes a governance-first approach rather than a regulation-first approach. The AI Guidelines for Business (updated 2024) are voluntary but widely followed. They cover:
Transparency and accountability
Safety and security
Fairness and non-discrimination
Privacy protection
Japan's approach favors industry self-regulation with government oversight, rather than prescriptive laws.
South Korea
South Korea passed the AI Framework Act in January 2025, becoming one of the first Asian countries with dedicated AI legislation. Key features:
High-impact AI classification system
Mandatory impact assessments for high-impact AI
AI Ethics Committee establishment
Support for AI innovation alongside regulation
Singapore
Singapore's Model AI Governance Framework (first published 2019, updated since) is voluntary but influential across Southeast Asia. It provides practical guidance on:
Internal governance structures for AI
Human involvement in AI decision-making
Operations management for AI systems
Stakeholder interaction and communication
Singapore also launched AI Verify, an AI governance testing framework and toolkit, in 2022.
India
India doesn't have AI-specific legislation yet, but the Digital Personal Data Protection Act (2023) affects AI systems that process personal data. The government has indicated it prefers a "risk-based, user-harm" approach to AI regulation rather than prescriptive rules.
Global AI regulation: A comparison
| Jurisdiction | Approach | Status | Key Feature | Penalty |
|---|---|---|---|---|
| EU | Risk-based, prescriptive | In force (phased) | Four-tier risk classification | Up to 35M euros / 7% revenue |
| US (Federal) | Voluntary framework + agency enforcement | Active | NIST AI RMF + FTC enforcement | Varies by agency and statute |
| US (States) | State-by-state laws | Mixed (some active) | Colorado SB 205, BIPA, NYC LL 144 | $500-$1,500/day (NYC); hundreds of millions in class actions (BIPA) |
| China | Prescriptive, content-focused | In force (3 laws) | Algorithm registration, content labeling | App removal, service suspension, fines |
| UK | Principles-based, sector regulators | Framework published | Five cross-sector principles | Sector-dependent |
| Canada | Risk-based legislation | Stalled (to be reintroduced) | High-impact AI focus | Up to $10M CAD / 3% revenue |
| Brazil | Rights-based, risk-tiered | Advancing in legislature | Human rights emphasis | TBD |
| Japan | Voluntary guidelines | Active | Industry self-regulation | No specific AI penalties |
| South Korea | Framework legislation | Passed Jan 2025 | High-impact AI classification | TBD (implementation pending) |
| Singapore | Voluntary governance framework | Active | AI Verify testing toolkit | No specific AI penalties |
| India | No AI-specific law yet | Data protection active | Digital Personal Data Protection Act | Up to 2.5B INR under DPDPA |
How global AI laws affect your product architecture
The regulations differ in specifics, but they converge on a set of technical requirements that should shape how you build AI products.
Transparency and disclosure
Almost every jurisdiction requires users to know when they're interacting with AI. This means:
AI disclosure in the UI - a clear indicator that content is AI-generated or that an AI system is involved in a decision
Audit trail of AI decisions - logged records of what the AI recommended, what data it used, and what decision resulted
Explainability - the ability to explain in plain language why the AI reached a particular output
Build these into your product from day one. They're not features you add later - they're architectural decisions about data logging, model monitoring, and user interface design.
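As a starting point for the audit-trail piece, here's a minimal sketch of an append-only decision log in Python. The field names and JSON-lines file storage are assumptions for illustration; a production system would write to immutable, access-controlled storage.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One audit-trail entry per AI-influenced decision."""
    model_id: str        # model name + version that produced the output
    input_summary: dict  # features used (redact raw personal data)
    output: str          # what the AI recommended
    final_decision: str  # what actually happened (may differ after human review)
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: AIDecisionRecord, path: str = "ai_audit.log") -> None:
    # Append-only JSON lines; swap for immutable storage in production.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    model_id="credit-scorer-v2.3",
    input_summary={"income_band": "B", "history_years": 4},
    output="approve",
    final_decision="approve",
))
```

Note the separation of `output` and `final_decision`: regulators increasingly want to see not just what the model said, but whether a human changed the outcome.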
Bias testing and fairness
High-risk AI systems face fairness requirements in the EU, US (Colorado, NYC), and most proposed laws. Your architecture needs:
Training data documentation - records of what data the model was trained on, how it was collected, and what biases were identified
Regular bias audits - periodic testing of AI outputs across protected categories (race, gender, age, disability)
Disparate impact analysis - statistical tests showing the AI doesn't produce discriminatory outcomes
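For the disparate impact analysis, one common screening statistic is the selection-rate ratio between groups, often checked against the "four-fifths rule" from US employment practice. Here's a minimal sketch; the group labels, sample data, and 0.8 threshold are illustrative, and a real audit involves more than one statistic.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs from AI decisions."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        selected[group] += picked
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes: list[tuple[str, bool]]) -> float:
    """Lowest group selection rate divided by the highest.
    A common screening threshold is the four-fifths rule (ratio >= 0.8)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative data: group_a selected at 40%, group_b at 25%.
decisions = ([("group_a", True)] * 40 + [("group_a", False)] * 60
             + [("group_b", True)] * 25 + [("group_b", False)] * 75)
print(f"DI ratio: {disparate_impact_ratio(decisions):.2f}")  # 0.62 -- below 0.8, flag for review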
Human oversight
Every major AI regulation requires human oversight for consequential decisions. Technically, this means:
Human-in-the-loop design - the ability for a human to review, override, or reject AI decisions before they take effect
Escalation paths - clear triggers that send AI decisions to human reviewers
Override logging - records of when and why humans overrode AI recommendations
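Here's a minimal sketch of what human-in-the-loop routing with override logging can look like. The confidence threshold, review callback, and field names are assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Review:
    approved: bool
    reviewer: str
    reason: str

def decide_with_oversight(
    ai_recommendation: str,
    confidence: float,
    is_consequential: bool,
    human_review: Callable[[str], Review],
    confidence_floor: float = 0.9,
) -> str:
    """Route consequential or low-confidence AI decisions to a human."""
    if not (is_consequential or confidence < confidence_floor):
        return ai_recommendation
    review = human_review(ai_recommendation)
    if not review.approved:
        # Override logging: in production this appends to the audit trail.
        print(f"OVERRIDE by {review.reviewer}: {review.reason}")
        return "escalated_for_manual_decision"
    return ai_recommendation

result = decide_with_oversight(
    ai_recommendation="reject_application",
    confidence=0.95,
    is_consequential=True,  # e.g. a lending decision -- always reviewed
    human_review=lambda rec: Review(False, "j.doe", "missing income verification"),
)
print(result)  # escalated_for_manual_decision
```

The escalation trigger is explicit in code rather than buried in policy documents, which is exactly what auditors ask to see.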
Data governance
AI data governance requirements overlap heavily with privacy laws like GDPR and CCPA. Your AI system needs:
Data provenance tracking - where training data came from and whether you have rights to use it
Purpose limitation - data collected for one purpose can't be repurposed for AI training without consent
Data subject rights - users can request access to, correction of, and deletion of their data, including data used in AI models
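A provenance registry can start as a structured record per training dataset. The sketch below is illustrative; the fields shown are assumptions, not a standard schema, though they track the questions regulators ask (source, legal basis, original purpose, known biases).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetProvenance:
    name: str
    source: str               # where the data came from
    license_or_basis: str     # legal basis for use (license, consent, contract)
    collected_for: str        # original purpose -- guards purpose limitation
    known_biases: tuple[str, ...]
    contains_personal_data: bool

REGISTRY: list[DatasetProvenance] = [
    DatasetProvenance(
        name="loan_applications_2021_2023",
        source="internal CRM export",
        license_or_basis="customer contract + consent for model training",
        collected_for="credit underwriting",
        known_biases=("underrepresents applicants under 25",),
        contains_personal_data=True,
    ),
]

def check_purpose(dataset: DatasetProvenance, new_purpose: str) -> bool:
    """Purpose limitation: repurposing data needs a documented new basis."""
    return new_purpose == dataset.collected_for

print(check_purpose(REGISTRY[0], "marketing segmentation"))  # False -- needs fresh consent
```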
What this costs
Building AI products with global compliance in mind adds 15-25% to a standard development budget. Here's where the money goes:
| Cost Area | Typical Range | What It Covers |
|---|---|---|
| Compliance assessment | $15,000-$50,000 | Map applicable regulations, gap analysis |
| AI risk documentation | $10,000-$30,000 | Risk assessments, impact assessments, technical documentation |
| Transparency features | $20,000-$60,000 | AI disclosure UI, explainability modules, audit trails |
| Bias testing infrastructure | $15,000-$45,000 | Testing pipelines, statistical analysis, audit reports |
| Human oversight systems | $10,000-$35,000 | Review interfaces, escalation logic, override logging |
| Ongoing compliance monitoring | $3,000-$8,000/month | Regulation tracking, periodic audits, documentation updates |
The cost of not doing it: EU AI Act fines reach 7% of global revenue. BIPA class actions have produced settlements exceeding $600 million. NYC LL 144 penalties accumulate daily. And the reputational cost of a public AI compliance failure - algorithmic bias in hiring, discriminatory lending, unauthorized surveillance - can exceed any fine.
"Bias testing isn't just a regulatory checkbox. We've seen AI hiring tools perform differently across demographic groups in ways the client never noticed during QA. The EU AI Act and Colorado SB 205 are forcing a practice that should have been standard from the start. If your AI makes consequential decisions, you need to know how it performs across protected groups before it goes live."
RaftLabs Engineering Team
Questions to ask your development partner
Before you hire a team to build an AI product for international markets, ask these questions:
1. Which AI regulations have you built products under? Look for direct experience with EU AI Act, NIST AI RMF, or relevant state laws - not just general compliance awareness.

2. How do you handle AI risk classification? A good partner maps your product's features to risk categories before writing code, not after.

3. What's your approach to bias testing? Ask for specifics: which statistical tests, how often, across which protected categories, and who reviews the results.

4. How do you build explainability into AI systems? The answer should cover model-level explainability (SHAP values, feature importance) and user-level explainability (plain-language explanations in the UI). A minimal sketch follows this list.

5. Can you show me your AI documentation templates? Companies that have done this before have standard templates for risk assessments, data governance documentation, and conformity assessment preparation.

6. How do you track regulatory changes across jurisdictions? AI regulation changes quarterly. Ask how they stay current and how they'll notify you when new requirements affect your product.
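On question 4, here's what a minimal two-layer explainability sketch might look like, using scikit-learn feature importances as the model-level signal and a generated plain-language sentence as the user-level layer. The toy model, feature names, and wording are illustrative stand-ins; SHAP-based local explanations follow the same pattern.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a toy model (stand-in for a hiring or credit model).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["years_experience", "skills_match", "education_level", "referral"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-level explainability: which features drive predictions overall.
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)

# User-level explainability: a plain-language summary for the UI.
top_feature, weight = ranked[0]
print(f"The strongest factor in this model's decisions is '{top_feature}' "
      f"(importance {weight:.0%}).")
```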
Your AI compliance checklist
Use this checklist before, during, and after development.
Before development
- ✓ Identify every country/state where your AI product will be available
- ✓ Map applicable AI regulations for each jurisdiction
- ✓ Classify your AI system's risk level under the EU AI Act framework (even if you're not serving the EU, it's a useful baseline)
- ✓ Document your AI system's purpose, intended users, and decision scope
- ✓ Check whether your AI use case triggers sector-specific rules (healthcare, finance, employment, education)
- ✓ Engage legal counsel familiar with AI regulation in your target markets
During development
- ✓ Build AI disclosure/transparency into the user interface
- ✓ Implement audit trail logging for all AI-influenced decisions
- ✓ Create a training data documentation system (sources, collection methods, known biases)
- ✓ Build human oversight mechanisms - review, override, and escalation
- ✓ Design bias testing into your CI/CD pipeline (see the sketch after this list)
- ✓ Implement explainability features appropriate to your risk level
- ✓ Document the AI system's technical design, training process, and performance metrics
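As referenced in the bias-testing item above, here's an illustrative pytest-style gate that could run in CI. The evaluation hook, group labels, and 0.8 threshold are assumptions; the point is that a fairness regression fails the build just like a broken unit test.

```python
# test_bias_gate.py -- illustrative CI check (run with pytest).
# Fails the build if the disparate impact ratio drops below 0.8.

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    return min(rates.values()) / max(rates.values())

def load_latest_eval_rates() -> dict[str, float]:
    # Hypothetical hook: a real pipeline would score a held-out,
    # demographically labeled evaluation set against the candidate model.
    return {"group_a": 0.41, "group_b": 0.38}

def test_four_fifths_rule():
    ratio = disparate_impact_ratio(load_latest_eval_rates())
    assert ratio >= 0.8, f"Disparate impact ratio {ratio:.2f} below 0.8 threshold"
```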
Before launch
- ✓ Complete risk assessments and impact assessments for applicable jurisdictions
- ✓ Run bias audits across all protected categories relevant to your use case
- ✓ Prepare user-facing documentation (AI transparency notices, privacy policies updated for AI)
- ✓ Register your AI system where required (China's CAC registration, EU database for high-risk systems)
- ✓ Conduct a conformity assessment if required (EU high-risk AI)
- ✓ Verify compliance with any sector-specific requirements
After launch
- ✓ Schedule regular bias audits (NYC requires annual; best practice is quarterly)
- ✓ Monitor AI system performance against documented accuracy standards
- ✓ Track regulatory changes in every market you serve
- ✓ Maintain incident reporting procedures for AI harms
- ✓ Update documentation when the AI system changes
- ✓ Conduct annual compliance reviews across all applicable jurisdictions
The compliance market is responding to the regulatory pressure. Gartner projects global AI governance platform spending will reach $492 million in 2026 and surpass $1 billion by 2030 - driven by AI regulation reaching 75% of the world's economies by that date. Companies building AI governance infrastructure now are positioning ahead of a spend wave that's already underway.
The direction is clear
Every major economy is moving toward AI regulation. The specific rules differ - prescriptive in the EU, sector-based in the US, content-focused in China, principles-based in the UK. But the direction is consistent: transparency, fairness, human oversight, and accountability.
If you're building an AI product today, treat compliance as a product feature, not a legal afterthought. The companies that build governance into their AI architecture now won't have to rebuild when the next regulation drops.
And given the pace of AI legislation globally, the next regulation is never far away.
Frequently Asked Questions
**Which major AI regulations are in force today?**
As of 2026, the EU AI Act is the most comprehensive, with prohibited practices already enforced and high-risk requirements phasing in through August 2026. China has three active AI regulations covering algorithmic recommendations (2022), deep synthesis/deepfakes (2023), and generative AI (2023). Several US states have passed AI laws - Colorado SB 205 (high-risk AI decisions), Illinois BIPA (biometric data), and NYC Local Law 144 (automated hiring tools). South Korea's AI Framework Act and Brazil's AI Bill are also moving through implementation.
**Does the EU AI Act apply to companies outside the EU?**
Yes, if your AI product is available to EU users or your AI system's output is used in the EU. Like GDPR, the EU AI Act has extraterritorial scope. A US company whose AI-powered hiring tool screens candidates for an EU-based client must comply with the Act's requirements for that high-risk use case.
**What is the NIST AI Risk Management Framework?**
NIST AI RMF is a voluntary US federal framework for managing AI risks. It's organized around four functions - Govern, Map, Measure, Manage - and provides guidance on AI trustworthiness. While not legally binding on its own, several US agencies reference it in procurement requirements, and some state laws point to NIST AI RMF as a compliance safe harbor.
**How does US AI regulation differ from the EU's?**
The EU takes a horizontal, risk-based approach - one law covering all AI systems classified by risk level. The US takes a sector-specific and state-by-state approach. There's no single federal AI law. Instead, existing agencies apply their authority (FTC for consumer protection, EEOC for employment, FDA for medical devices) while states pass their own AI-specific laws. The result is a patchwork that can be harder to track than the EU's single framework.
**What are the penalties for non-compliance?**
EU AI Act: up to 35M euros or 7% of global revenue for prohibited practices, 15M euros or 3% for high-risk violations. China: varies by regulation but includes app removal, service suspension, and fines. NYC Local Law 144: $500-$1,500 per violation per day. Colorado SB 205: enforced by the Attorney General under consumer protection authority. The financial risk varies widely, but the reputational and operational risks of non-compliance are consistent everywhere.
**Do I need a separate compliance program for each country?**
Not necessarily separate, but you need to track where requirements overlap and where they diverge. Building to the EU AI Act's high-risk requirements often covers a significant portion of other countries' requirements since it's the most prescriptive. But you'll still need country-specific additions - like China's algorithm registration requirement or NYC's bias audit mandate for hiring AI.
**Is there a single global AI standard I can build to?**
No single global standard exists yet, but the OECD AI Principles (endorsed by 47 jurisdictions) and ISO/IEC 42001 (AI Management System standard) provide international baselines. Building your AI governance around these frameworks, then adding jurisdiction-specific requirements, is the most efficient approach for global products.

