Executive takeaways
- The Act applies to providers, deployers, importers, distributors, and product manufacturers when an AI system is placed on the EU market, put into service in the EU, or its output is used in the EU; non-EU companies are therefore in scope whenever their AI's output is used in the Union.
- The framework rests on: (1) prohibited AI practices, (2) high-risk AI systems with prescriptive requirements, (3) transparency duties for specific non-high-risk uses, and (4) obligations for general-purpose AI (GPAI) models, with additional duties for GPAI models designated as having “systemic risk.”
- Enforcement includes substantial administrative fines: up to EUR 35,000,000 or 7% of worldwide annual turnover (whichever is higher) for prohibited practices; most other infringements carry up to EUR 15,000,000 or 3%, with lower caps for SMEs.
Scope and roles
The Regulation covers EU providers and deployers, as well as non-EU providers and deployers where the system's output is used in the Union; importers, distributors, authorised representatives, product manufacturers placing AI on the market together with their products, and affected persons located in the Union are also covered. AI used exclusively for national security, defence, or military purposes is excluded.
Executives should identify their role(s) across the value chain. Distributors, importers, or deployers can become “providers” (and assume provider obligations) if they rebrand, substantially modify, or change the intended purpose such that an AI system becomes high-risk.
Risk structure in the Act
The Act defines:
- Prohibited AI practices – Article 5
  - Behaviour manipulation causing harm; exploitation of vulnerabilities; social scoring.
  - Biometric categorisation inferring sensitive data; emotion inference in workplaces or education; untargeted facial-recognition scraping.
  - Real-time remote biometric identification in publicly accessible spaces (law enforcement exceptions only in narrowly defined cases).
- High-risk AI systems – Article 6 + Annexes I/III
  - AI used as a safety component of regulated products, or stand-alone systems in critical areas such as infrastructure, education, employment, essential services, law enforcement, migration, justice, and democratic processes.
  - Subject to strict obligations including conformity assessment, data quality, documentation, logging, human oversight, robustness, and accuracy.
- Transparency obligations – Article 50
  - People must be informed when they interact with an AI system, are subject to biometric categorisation or emotion recognition, or encounter synthetic/AI-generated content (with limited exceptions).
- General-purpose AI (GPAI) models – Chapter V
  - GPAI providers must maintain technical documentation and share information with downstream providers.
  - GPAI models with systemic risk face additional evaluation, cybersecurity, and incident-reporting duties.
What high-risk providers must implement
High-risk providers must operate a quality management system, maintain technical documentation, ensure robust data governance, enable automatic logging, provide clear instructions for use and effective human oversight, and achieve appropriate accuracy, robustness, and cybersecurity. They must undergo conformity assessment and register the system in the EU database where applicable.
Governance and enforcement
- Member States must designate competent authorities and single points of contact (by 2 August 2025).
- The EU AI Office oversees GPAI and systemic-risk supervision.
- National authorities enforce high-risk AI compliance.
- Fines: up to €35m or 7% of global annual turnover (whichever is higher) for prohibited practices; up to €15m or 3% for other infringements; up to €7.5m or 1% for supplying false information. Lower caps apply to SMEs.
Timeline
- 2 February 2025 – Prohibited practices banned; AI literacy obligations apply.
- 2 August 2025 – GPAI obligations apply.
- 2 August 2026 – High-risk AI requirements apply.
- 2 August 2027 – Requirements for high-risk AI embedded in regulated products (Annex I) apply.
- By 31 December 2030 – AI components of certain large-scale EU IT systems must comply.
Priority actions for executives
- Map AI systems and roles – confirm provider, deployer, importer, or distributor status.
- Screen for prohibitions – eliminate Article 5 uses.
- Plan for high-risk AI – establish documentation, risk management, and conformity procedures.
- For GPAI use/development – ensure transparency documentation and systemic-risk compliance if applicable.
- Update governance – assign responsibility, monitoring, and escalation processes before enforcement deadlines.
What Aqunama does
At Aqunama, we help you deploy the right AI for your business and ensure compliance with the relevant regulatory requirements, including the EU AI Act.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. It is intended to inform executives about the existence and scope of the EU Artificial Intelligence Act and to highlight that organisations using AI in certain ways may need to consult their legal advisors to determine how the regulation applies to them.
Still unsure how AI will impact your business? We know where it will.
Get Your Free AI Consultation


