Cross-border AI Compliance in Finance: What to expect across US, EU and China
The current state of AI in financial services
Artificial Intelligence (AI) is revolutionizing financial services (FS) at unprecedented speed, powering innovations from algorithmic trading and real-time fraud detection to AI-driven credit scoring, robo-advisory, and personalized wealth management. According to KPMG’s Pulse of Fintech H1’2025 report, global fintech investment reached $44.7 billion in the first half of 2025 alone, a substantial figure even as economic pressures tempered overall growth. This investment underscores AI's role in enhancing operational efficiency, cutting costs, and delivering personalized customer experiences. Regtech solutions are also gaining relevance as companies look to automate manual processes while maintaining regulatory compliance across jurisdictions.
In this interconnected global market, FS firms are increasingly deploying AI across borders, but ethical dilemmas, data privacy breaches, and systemic vulnerabilities loom large. For businesses, this means not just adopting AI for competitive edge but ensuring it aligns with evolving regulatory landscapes to avoid reputational damage or financial penalties.
Why AI regulation matters for FS globally
Regulation in FS is key to maintaining economic stability, consumer trust, and fair markets. AI amplifies unique risks in this space, particularly while adoption of the technology is still at an early stage. Regulations aim to enforce transparency, curb discrimination, safeguard sensitive financial data, and promote innovation without undermining safety.
Divergent global approaches create complications for FS firms, especially those operating in high-growth corridors like EU-Asia. Every jurisdiction has unique rules and expectations, leading to unforeseen operational hurdles and delayed market launches. However, they also open doors: Compliant firms can leverage regulations as a differentiator, accessing new markets through cross-jurisdictional licensing.
Global regulatory approaches to AI in FS
Globally, distinct approaches to AI regulation have emerged: The US favours a fragmented, innovation-first model; the EU pushes risk-based harmonization with human-centric safeguards; and China enforces centralized, security-driven controls. In FS, these intersect with sector-specific laws, demanding tailored compliance frameworks.
United States: Fragmented and innovation-oriented
The US continues without a unified federal AI law, relying on existing statutes and agency enforcement. President Trump's January 2025 Executive Order emphasizes "America's global AI dominance" by rescinding Biden-era barriers, creating a permissive environment. Recent developments include the reintroduction of the Unleashing AI Innovation in Financial Services Act, which would promote AI in FS through regulatory sandboxes at federal financial regulatory agencies. Additionally, the US government has begun to take clear steps to support the private sector and accelerate AI innovation, as proposed in America’s AI Action Plan.
What US regulation covers: Sector-specific applications, with a focus on transparency and on protecting users and consumers from bias. State laws, like Colorado's AI Act (effective 2026), mandate bias mitigation for high-risk AI in FS decisions. Exceptions: Voluntary guidelines for low-risk AI; no broad bans, but prohibitions on manipulative practices under existing laws.
Who it applies to: Developers (AI creators), deployers (FS firms using AI), and cross-border actors targeting US consumers.
Regulators: FTC (unfair AI practices), CFPB (FS consumer protection), OCC (banking), and DOJ.
Stage of implementation: Active federal enforcement via agencies; state rollouts in 2026 (e.g., California, Colorado).
To launch an AI product in US financial services, expect a fragmented regulatory landscape: with no unified federal AI law, firms must instead rely on existing statutes emphasizing bias mitigation, transparency, and consumer protection.
European Union: Risk-based and harmonised
The EU AI Act, in force since August 2024 and fully applicable by August 2027, sets a global benchmark with its risk-based framework. For FS, it aligns with GDPR and DORA, creating a robust regime with transparency and risk mitigation at its core.
What it covers: Prohibits unacceptable-risk AI (e.g., social scoring in lending); requires assessments, bias checks, transparency, and human oversight for high-risk FS AI. General-purpose models must disclose training data and respect IP. Exceptions: Low-risk AI (e.g., basic chatbots) faces minimal rules.
Who it applies to: Providers (developers), deployers (FS firms), importers/distributors, and cross-border actors if outputs affect the EU. Extraterritorial: Non-EU firms serving EU markets are in scope.
Regulators: EU AI Office (central), national authorities, and FS bodies like EBA/ECB. Member States designated their national authorities by August 2025.
Stage of implementation: Phased, with application milestones from 2025 through to full implementation of the Act on 2 August 2027.
Launching an AI product in EU financial services will require prioritising consumer trust and safety. For high-risk AI, such as automated loan approvals or insurance risk models, businesses should expect clear requirements, including bias audits, detailed documentation, and mandatory human oversight to prevent unfair outcomes. Issues to watch include navigating complex data privacy rules and ensuring cross-border compliance when sourcing AI globally; leveraging EU sandboxes, however, can ease testing and build competitive trust.
China: Centralized and security-focused
China’s Interim Measures for Generative AI (effective August 2023) reflect a top-down, security-driven approach to regulation. While a July 2025 global governance plan signals openness to international cooperation, China’s domestic rules remain tightly controlled, especially in financial services (FS), where algorithm management and data security are overseen by specialised sector regulators.
What it covers:
Public-facing generative AI services, including chatbots and content creation tools. Obligations include:
Content labelling and dataset transparency
Use of lawful, unbiased training data
Prevention of illegal or harmful content (e.g., endangering national security)
FS-specific rules govern:
Algorithm vetting in asset management
AI use in credit scoring, trading, and financial advice
Exemptions: Internal R&D and non-public tools are excluded from the Measures, but firms must evaluate exposure based on function and reach.
Who it applies to:
Both domestic and foreign providers of generative AI services available in China. Also applies to FS firms using AI internally or in client-facing products. Notably, the Measures are extraterritorial: foreign companies targeting Chinese users must comply.
Regulators:
Cyberspace Administration of China (CAC) – lead supervisory authority
Sectoral FS agencies: People’s Bank of China (PBOC), NFRA, CSRC
Coordination with data, education, and national security ministries
Stage of implementation:
In force since August 2023. Additional data security and content standards apply from November 2025. Security assessments are mandatory for models deemed high-impact (e.g. public mobilisation, economic influence).
Comparative Table
Jurisdiction | Key Approach & Coverage | Who It Applies To | Regulators | Implementation Stage | Key Considerations for FS AI Launch |
---|---|---|---|---|---|
United States | Fragmented, innovation-driven; uses existing laws to address bias, transparency, consumer protection in FS AI (e.g., credit scoring). State laws (e.g., Colorado AI Act, 2026) mandate bias mitigation for high-risk AI. No broad bans; prohibits manipulative practices. | Developers, FS firms, cross-border actors targeting US consumers. | FTC, CFPB, OCC, DOJ, state AGs. | Active federal enforcement; state rollouts in 2026 (e.g., Colorado, California). | Conduct bias audits, ensure transparency, navigate state-federal rules. Use proposed sandboxes to test AI systems safely. |
European Union | Risk-based EU AI Act (2024); bans unacceptable AI (e.g., social scoring); requires audits, transparency, human oversight for high-risk FS AI (e.g., loan approvals). Low-risk AI faces minimal rules; aligns with GDPR, DORA. | Providers, FS firms, importers, non-EU firms with EU outputs. | EU AI Office, EBA, ECB, national authorities. | Phased: Bans February 2025; full rules August 2027. | Prioritize consumer trust with bias audits, documentation, human oversight. Leverage sandboxes for testing and compliance. |
China | Centralized, security-focused; regulates generative AI with content labelling, bias prevention, data security. FS-specific: Algorithm vetting for credit scoring, trading. Exempts internal R&D. | Providers, FS users in China, foreign firms serving China. | CAC, PBOC, NFRA, CSRC. | In force August 2023; new standards November 2025. | Implement content labelling, security reviews. Ensure compliance with national security and data rules for public-facing AI. |
Key takeaways and interoperability challenges for FS
The US, EU, and China are charting markedly different paths in regulating AI within financial services.
In the US, regulation remains fragmented and innovation-led, with firms navigating overlapping federal guidance and emerging state laws, such as the Colorado AI Act, which introduce mandatory bias audits and transparency controls.
The EU enforces a harmonised, risk-based regime under the AI Act (2024, with full application by 2027), banning harmful practices like social scoring and imposing strict obligations, such as algorithmic transparency, human oversight, and GDPR alignment, for high-risk FS use cases like credit underwriting.
China, meanwhile, maintains a security-first posture. The Interim Measures for Generative AI (2023) focus on public-facing generative AI services, requiring content labelling, legitimate data sourcing, and safeguards against discriminatory or harmful outputs. While internal R&D is excluded, compliance expectations are high for services accessible by the public.
For financial institutions operating across borders, these divergent frameworks present both risk and opportunity. Achieving compliance while enabling AI-driven growth calls for a jurisdiction-specific licensing strategy and proactive risk alignment.
Braithwate and FintechXpndr help fintechs and financial institutions navigate this complexity, whether launching AI models in Europe, licensing FS tools in Asia, or designing governance frameworks across multiple regimes.
Get in touch to explore how we can support your next AI expansion.