The EU Artificial Intelligence Act (EU AI Act) establishes a binding regulatory framework that formalises how AI systems must be designed, operated, governed, and evidenced in regulated environments. For organisations in financial services, insurance, and health and life sciences, the Act clarifies supervisory expectations for AI-enabled systems that influence regulated decisions, building on existing accountability, risk management, and governance obligations (EUR-Lex).
For regulated organisations, the EU AI Act requires that high-risk AI systems be operated under documented, auditable controls that demonstrate risk management, human oversight, and traceability in practice, not only in policy. Supervisory assessment will focus on operational evidence across the AI lifecycle rather than stated intent (European Parliament EPRS).
The EU AI Act entered into force on 1 August 2024, with obligations applying on a phased basis. Prohibited practices and AI literacy obligations became applicable from 2 February 2025. The governance rules and obligations for general-purpose AI models, along with the confidentiality provisions under Article 78, became applicable on 2 August 2025. Obligations for high-risk AI systems listed in Annex III, transparency requirements under Article 50, and measures in support of innovation apply from 2 August 2026. Article 6(1) and its corresponding obligations, covering AI systems used as or embedded in products subject to EU harmonisation legislation requiring third-party conformity assessment, apply from 2 August 2027 (EUR-Lex).
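Because these dates determine which controls must be live when, compliance teams often encode the timeline as data. The sketch below is a minimal illustration under our own naming assumptions, not legal text, showing how a readiness tracker might report which obligation sets apply on a given date.

```python
from datetime import date

# Illustrative only: the Act's phased milestones encoded as data, so a
# readiness tracker can flag which obligation sets are live on a given
# date. The mapping text and names are our own summaries, not legal text.
MILESTONES = {
    date(2025, 2, 2): "Prohibited practices; AI literacy obligations",
    date(2025, 8, 2): "GPAI model obligations; governance; Article 78 confidentiality",
    date(2026, 8, 2): "Annex III high-risk obligations; Article 50 transparency",
    date(2027, 8, 2): "Article 6(1) product-embedded high-risk systems",
}

def applicable_obligations(as_of: date) -> list[str]:
    """Return the obligation sets already applicable on `as_of`."""
    return [desc for start, desc in sorted(MILESTONES.items()) if as_of >= start]

# Example: what applies in September 2026?
for item in applicable_obligations(date(2026, 9, 1)):
    print(item)
```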
The Act applies to organisations that place or deploy AI systems affecting individuals in the European Union, including non-EU entities operating into the EU market. High-risk AI systems, common in regulated industries, are subject to detailed requirements covering classification, governance, documentation, oversight, logging and monitoring. In supervisory contexts, the absence of evidence is typically treated as the absence of control (DLA Piper, 2025).
In financial services, AI systems used to evaluate the creditworthiness of natural persons or establish their credit score are explicitly classified as high-risk under Annex III, with the exception of AI systems used for the purpose of detecting financial fraud (EUR-Lex).
Other financial AI applications such as transaction monitoring and anti-money-laundering prioritisation are not explicitly listed in the same Annex III item, meaning their classification as high-risk will depend on intended purpose and deployment context rather than being presumed by default. Recital 58 of the Act further clarifies that AI systems designed for fraud detection in financial services, and those used for prudential purposes such as calculating capital requirements, should not be considered high-risk.
Supervisory scrutiny is expected to focus on traceability across decision pipelines that combine models, rules, data sources and human intervention, as well as alignment with established model risk management and validation frameworks. Regulators are also likely to examine escalation thresholds, override mechanisms and post-decision review processes (European Banking Authority, 2025).
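To make that traceability expectation concrete, the sketch below shows one plausible shape for a per-decision trace record in a pipeline combining a scoring model, deterministic rules and human review. The schema and field names are illustrative assumptions rather than a format prescribed by the Act or the EBA.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical per-decision trace record for a pipeline that combines a
# scoring model, deterministic rules and human review. Field names are
# illustrative assumptions, not a schema prescribed by the Act or the EBA.
@dataclass
class DecisionTrace:
    decision_id: str
    timestamp: datetime
    model_versions: dict[str, str]        # e.g. {"credit_score": "v4.2.1"}
    rule_set_version: str                 # version of deterministic rules applied
    data_sources: list[str]               # upstream inputs consulted
    automated_outcome: str                # what the system recommended
    human_override: str | None = None     # reviewer decision, if any
    override_rationale: str | None = None # justification recorded at the time
    escalated: bool = False               # whether an escalation threshold fired

trace = DecisionTrace(
    decision_id="APP-2026-00042",
    timestamp=datetime.now(timezone.utc),
    model_versions={"credit_score": "v4.2.1"},
    rule_set_version="rules-2026-03",
    data_sources=["bureau_feed", "transaction_history"],
    automated_outcome="refer",
    human_override="approve",
    override_rationale="Income verified against documentation offline.",
    escalated=True,
)
```

Capturing the trace at decision time, rather than reconstructing it later, is what allows override rates and escalation patterns to be reviewed without retrospective effort.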
In insurance, AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance are explicitly classified as high-risk under Annex III (EUR-Lex).
Other insurance AI applications such as underwriting outside of life and health, claims assessment and document processing are not explicitly listed in Annex III, meaning their classification will depend on whether they materially influence policyholder outcomes in a way that triggers high-risk obligations under Article 6.
Where AI systems do fall within scope, regulatory review is expected to concentrate on explainability at the individual decision level, governance of escalation and dispute handling, and monitoring for bias or outcome drift over time. Documentation enabling retrospective justification of outcomes will be a key evidentiary focus (EIOPA, 2025).
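As one concrete example of monitoring for outcome drift, many teams track the Population Stability Index (PSI) between a baseline distribution of decision outcomes and the current one. The snippet below is a hedged sketch; the score bands, figures and the roughly 0.1 review threshold are conventional practice rather than values set by the Act or EIOPA.

```python
import math

# Illustrative sketch of outcome-drift monitoring using the Population
# Stability Index (PSI) over pre-binned outcome proportions. The bands,
# figures and thresholds below are conventional examples, not regulatory
# values.
def psi(baseline: list[float], current: list[float]) -> float:
    """PSI between two binned distributions; each list should sum to ~1."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (c - b) * math.log((c + eps) / (b + eps))
        for b, c in zip(baseline, current)
    )

# Share of decisions per outcome band: launch baseline vs. current quarter.
baseline = [0.10, 0.25, 0.40, 0.20, 0.05]
current  = [0.07, 0.20, 0.38, 0.25, 0.10]
print(f"PSI = {psi(baseline, current):.3f}")  # ~0.07; above ~0.1 conventionally prompts review
```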
In health and life sciences, AI systems intended to evaluate and classify emergency calls or to be used for emergency healthcare patient triage are explicitly classified as high-risk under Annex III (EUR-Lex).
Clinical decision support systems that function as or within medical devices may also be classified as high-risk through the product and safety component route under Article 6(1), whose obligations apply from 2 August 2027. Broader applications such as patient-data-driven analytics are not automatically high-risk; their classification will depend on whether their intended purpose materially influences diagnostic or treatment decisions in a way that meets a listed high-risk trigger.
Where AI systems do fall within scope, supervisory expectations will prioritise robust data governance, clear delineation of professional responsibility, and documentation sufficient to support audit, investigation or clinical review of AI-supported outcomes (European Medicines Agency, 2025).
The EU AI Act is a directly applicable regulation across EU Member States. Under Article 2, it applies to providers placing AI systems on the market or putting them into service in the Union regardless of whether they are established in the EU or in a third country, to deployers located within the Union, and to providers and deployers established in a third country where the output produced by the AI system is used in the Union. This extraterritorial scope means that global groups, non-EU vendors and third-country service providers are in scope where their AI system outputs reach individuals in the EU (EUR-Lex).
Regulated enterprises most commonly act as deployers and remain accountable for how AI systems are used in practice, including adherence to operational safeguards, oversight requirements and documentation obligations, regardless of whether systems are developed internally or procured (Allen & Overy, 2025).
The Act adopts a risk-based framework, with high-risk AI systems subject to the most extensive requirements. Classification depends on intended purpose and deployment context, including whether a system is used in regulated decision-making, access to essential services, or health-related domains as defined in Annex III. However, an AI system listed in Annex III is not automatically classified as high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. The Act specifies that this includes systems performing narrow procedural tasks, enhancing human-generated results, identifying decision patterns or deviations without altering prior human assessments, and performing preparatory tasks for Annex III use cases. Where a provider determines that their system does not meet the high-risk threshold, documentation of this assessment is required and must be provided to national authorities when requested (EUR-Lex, Annex III).
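One practical way to meet that documentation duty is to capture the assessment in structured, retrievable form. The sketch below is a hypothetical template under our own naming assumptions, not a mandated format; it also reflects the Act's rule that an Annex III system performing profiling of natural persons always remains high-risk.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical structure for recording an Article 6(3) assessment that an
# Annex III-listed system does not meet the high-risk threshold. Field
# names and shape are assumptions, not a mandated template. A system that
# performs profiling of natural persons always remains high-risk.
@dataclass
class DerogationAssessment:
    system_name: str
    annex_iii_area: str
    performs_profiling: bool                    # profiling always stays high-risk
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_influencing_assessment: bool
    preparatory_task_only: bool
    assessor: str
    assessed_on: str

    def qualifies(self) -> bool:
        """True if at least one derogation condition holds and no profiling."""
        if self.performs_profiling:
            return False
        return any([
            self.narrow_procedural_task,
            self.improves_completed_human_activity,
            self.detects_patterns_without_influencing_assessment,
            self.preparatory_task_only,
        ])

    def to_record(self) -> str:
        """Serialise for retention and production to authorities on request."""
        return json.dumps({**asdict(self), "qualifies": self.qualifies()}, indent=2)
```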
Classification is an operational governance responsibility. Regulators expect classification decisions to be documented, periodically reviewed and consistently reflected in system controls, monitoring practices and oversight arrangements (Clifford Chance, 2025).
High-risk AI systems are subject to requirements covering risk management, technical and operational documentation, transparency, human oversight, record-keeping, and monitoring. These requirements are designed to support traceability, accountability, and supervisory review (Hogan Lovells, 2025).
Supervisory assessments will examine whether these controls are embedded into routine operations and whether evidence can be produced without retrospective reconstruction. Static documentation without operational artefacts is unlikely to satisfy supervisory expectations (Simmons & Simmons, 2025).
A workable AI operating model requires integration of EU AI Act obligations into existing governance, risk, and delivery processes. This includes maintaining a central inventory of AI systems, applying standardised controls linked to risk classification, embedding compliance checkpoints into development and change workflows, and generating audit evidence as a by-product of normal operations.
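As an illustration of how these elements can fit together, the sketch below models a single inventory entry whose risk classification drives a standard control set and links to where audit evidence accrues. The identifiers, control names and evidence locations are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch, assuming a central inventory in which each system's
# risk classification drives a standard control set and points to where
# evidence accrues. All identifiers, control names and locations are
# illustrative assumptions, not a prescribed format.
INVENTORY_ENTRY = {
    "system_id": "ai-credit-scoring-01",
    "owner": "Retail Credit Risk",
    "lifecycle_stage": "production",
    "risk_classification": {
        "status": "high-risk",
        "basis": "Annex III: creditworthiness evaluation",
        "last_reviewed": "2026-01-15",
        "next_review": "2026-07-15",
    },
    "controls": [
        "risk-management-file",       # Article 9
        "technical-documentation",    # Article 11
        "event-logging",              # Article 12
        "human-oversight-procedure",  # Article 14
        "post-market-monitoring",     # Article 72
    ],
    "evidence_sources": {
        "decision_logs": "s3://audit/ai-credit-scoring-01/decisions/",
        "model_registry": "mlflow://models/credit-score",
        "change_approvals": "jira://AICR",
    },
}
```

Keeping classification, controls and evidence pointers in one record means audit artefacts are generated where work already happens, rather than assembled retrospectively.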
Organisations seeking clarity on their current position can undertake a Free AI System Assessment with Cloud Combinator to evaluate alignment with EU AI Act risk categories and operational requirements. Alternatively, we offer a complimentary EU AI Act Readiness Checklist that can support your internal review of governance, monitoring, and documentation maturity.
European Union — Regulation (EU) 2024/1689 (Artificial Intelligence Act)
https://eur-lex.europa.eu/eli/reg/2024/1689/oj
European Parliament — EU AI Act: State of Play and Key Issues (EPRS, 2024–2025)
https://www.europarl.europa.eu/thinktank
DLA Piper — EU AI Act: Key Obligations for Deployers (2025)
https://www.dlapiper.com
Clifford Chance — High-Risk AI Systems under the EU AI Act (2025)
https://www.cliffordchance.com
EBA, EIOPA, EMA — Sectoral Supervisory Perspectives on AI (2025)
https://www.eba.europa.eu | https://www.eiopa.europa.eu | https://www.ema.europa.eu