The EU Artificial Intelligence Act entered into force on August 1, 2024, and its obligations for general-purpose AI models began to apply on August 2, 2025. The regulation introduces a risk-based framework to govern the development and use of AI systems within the European Union. It categorizes AI systems into four tiers—unacceptable, high-risk, limited-risk, and minimal-risk—each with distinct obligations or restrictions.
The Act applies to providers and deployers of AI systems whenever the system is placed on the market or used in the EU, regardless of where the provider is established. It also introduces special provisions for General Purpose AI (GPAI) models such as those behind ChatGPT or Claude. Compliance deadlines roll out progressively, from roughly 6 to 36 months after entry into force.
Why does it matter?
- It creates the first comprehensive AI regulatory framework globally.
- Unacceptable-risk systems (e.g., social scoring and manipulative or exploitative AI techniques) are banned outright.
- High-risk systems face strict conformity requirements, affecting developers in health, education, finance, and justice.
- GPAI models must meet transparency and documentation requirements, including publishing a summary of the content used for training.
- Non-compliance can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher.
What’s next?
Key implementation milestones:
- 2025–2026: GPAI transparency rules and codes of practice take effect.
- 2026–2027: Conformity assessments for high-risk systems become mandatory.
- New supervisory bodies, including the European AI Office and national market surveillance authorities, will begin oversight and enforcement.
🔗 Source: AI Act High-Level Summary – artificialintelligenceact.eu, August 2025
Read our analysis of the new EU AI Act:
The Bureaucratic AI Act – 7% fines and 0% clarity
What sounds like a regulatory breakthrough may turn out to be a bureaucratic trap for small developers and AI innovators.