On 1 August 2024, the European Artificial Intelligence Act (AI Act) enters into force. The Act aims to foster responsible artificial intelligence development and deployment in the EU.
Proposed by the Commission in April 2021 and agreed by the European Parliament and the Council in December 2023, the AI Act addresses potential risks to citizens’ health, safety, and fundamental rights. It provides developers and deployers with clear requirements and obligations regarding specific uses of AI while reducing administrative and financial burdens for businesses.
The AI Act introduces a uniform framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:
- Minimal risk: most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct.
- Specific transparency risk: systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.
- High risk: high-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear user information, and human oversight.
- Unacceptable risk: for example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people's fundamental rights and are therefore banned.
The EU aspires to be the global leader in safe AI. By building a strong regulatory framework based on human rights and fundamental values, the EU can develop an AI ecosystem that benefits everyone. For citizens, this means better healthcare, safer and cleaner transport, and improved public services. For businesses, it brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing. Governments, in turn, can benefit from cheaper and more sustainable services such as transport, energy, and waste management.
The Commission has recently launched a consultation on a Code of Practice for providers of general-purpose Artificial Intelligence (GPAI) models. This Code, foreseen by the AI Act, will address critical areas such as transparency, copyright-related rules, and risk management. GPAI providers with operations in the EU, businesses, civil society representatives, rights holders, and academic experts are invited to submit their views and findings, which will feed into the Commission's upcoming draft of the Code of Practice on GPAI models.
The AI Act's provisions on GPAI will enter into application in 12 months. The Commission expects to finalise the Code of Practice by April 2025. The feedback from the consultation will also inform the work of the AI Office, which will supervise the implementation and enforcement of the AI Act's rules on GPAI.
For more information
European Artificial Intelligence Act comes into force - press release
More about the European AI Act
Excellence and trust in artificial intelligence
Details
- Publication date: 1 August 2024
- Author: Directorate-General for Communication