1. Introduction
The development of artificial intelligence (AI) over recent decades has fundamentally transformed the way society functions. AI systems are now used in healthcare, education, finance, security, employment, the judiciary, and many other fields. Although these technologies bring numerous benefits, they also entail significant risks, particularly with regard to human rights, privacy, discrimination, and security. Recognizing the need for a clear and comprehensive regulatory framework, the European Union (EU) adopted the EU AI Act (Artificial Intelligence Act) in 2024 – the world’s first comprehensive legal instrument regulating the development, use, and placing on the market of AI systems.
2. Concept and significance of the EU AI Act
The EU AI Act is a regulation of the European Union that establishes uniform rules for the development, placing on the market, and use of artificial intelligence systems within the EU. As a regulation rather than a directive, it is directly applicable in all Member States without requiring transposition into national legislation.
The core idea of the EU AI Act is to enable the development of innovative AI technologies while simultaneously safeguarding citizens’ fundamental rights, democratic values, and legal certainty. The Act is based on a risk-based approach, meaning that not all AI systems are treated equally—the level of regulation depends on the potential risk a system poses to individuals and society.
The significance of the EU AI Act is reflected in several key aspects:
- it represents a global precedent in the regulation of AI;
- it increases public trust in AI technologies;
- it ensures legal clarity for companies and innovators;
- it strengthens the EU’s position as a leader in the ethical and responsible use of technology.
3. Objectives of the EU AI Act
The main objectives of the EU AI Act can be summarized as follows:
- Protection of fundamental rights – preventing discrimination, mass surveillance, and unethical uses of AI.
- Safety and reliability of AI systems – ensuring that systems operate in a predictable, accurate, and secure manner.
- Transparency – users must be informed when they are interacting with an AI system.
- Promotion of innovation – creating a regulatory framework that does not hinder technological development.
- A unified EU market – preventing the fragmentation of rules among Member States.
4. Definition of artificial intelligence in the EU AI Act
The EU AI Act provides a broad definition of artificial intelligence. Under the final text of the Act, an AI system is a machine-based system designed to operate with varying levels of autonomy, which infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. (Earlier drafts instead listed specific techniques, such as machine learning and logic- or statistics-based approaches.)
This broad definition allows the regulation to cover both current and future AI technologies, thereby ensuring the long-term relevance of the Act.
5. Classification of AI systems according to risk level
One of the most important features of the EU AI Act is the classification of AI systems into four risk categories.
5.1. Unacceptable risk
AI systems that pose an unacceptable risk are completely prohibited. These are systems that seriously endanger fundamental rights and human dignity. Examples include:
- AI systems for social scoring of citizens;
- manipulative AI systems that influence human behavior without individuals’ awareness;
- certain forms of real-time biometric surveillance.
The prohibition of such systems demonstrates a clear ethical boundary that the EU is unwilling to cross.
5.2. High-risk AI systems
High-risk AI systems are permitted but subject to strict requirements. They are used in sensitive areas such as:
- employment and candidate selection;
- education and assessment;
- creditworthiness and financial services;
- medical diagnostics;
- judicial and law enforcement systems.
Because these systems can have serious consequences for individuals’ lives, the obligations imposed on their providers and deployers are particularly detailed.
5.3. Limited risk
AI systems with limited risk are primarily subject to transparency obligations. For example, chatbots or content-generation systems must clearly inform users that they are interacting with an AI system rather than a human.
5.4. Minimal or negligible risk
Most AI systems fall into this category, such as AI used in video games, image filters, or music recommendation systems. No additional legal obligations apply to these systems, and their use remains largely unrestricted.
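The four-tier classification above can be sketched as a simple data model. The tier names follow the Act, but the example use cases and the lookup function below are illustrative assumptions, not the legal test itself, which depends on the Act’s detailed criteria and annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted under strict requirements"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no additional obligations"

# Illustrative mapping of the example use cases mentioned in the
# text above to tiers -- a real classification would follow the
# Act's criteria, not a simple dictionary lookup.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "candidate selection in hiring": RiskTier.HIGH,
    "medical diagnostics": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "music recommendation": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Summarize the regulatory consequence for an example use case."""
    tier = EXAMPLE_USE_CASES[use_case]
    return f"{use_case}: {tier.name} risk ({tier.value})"
```

For instance, `obligations_for("medical diagnostics")` reports the high-risk tier, matching Section 5.2.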
6. Obligations for high-risk AI systems
The EU AI Act imposes a range of obligations on high-risk AI systems in order to ensure their safety, fairness, and reliability.
6.1. Risk management
Providers must establish a risk management system that identifies, assesses, and mitigates potential harmful impacts of the AI system throughout its entire lifecycle.
6.2. Data quality
Data used to train AI systems must be relevant, representative, and free from bias. This requirement aims to prevent discrimination based on gender, race, ethnic origin, or other personal characteristics.
6.3. Technical documentation
Providers are required to prepare detailed technical documentation that enables regulatory authorities to understand how the system operates and to verify its compliance with the law.
6.4. Transparency and user information
Users of high-risk AI systems must be provided with clear information about how the system functions, its limitations, and its proper use.
6.5. Human oversight
One of the key principles of the EU AI Act is the human-in-the-loop approach. This means that humans must be able to monitor, intervene in, or deactivate AI systems when necessary.
6.6. Accuracy, robustness, and cybersecurity
AI systems must achieve an appropriate level of accuracy, be robust against errors, and be protected against misuse and cyberattacks.
7. Obligations for AI system providers and deployers
The EU AI Act distributes obligations across the actors in the AI value chain.
- Providers must ensure that their systems comply with the regulation before placing them on the market.
- Deployers (the actors earlier drafts called “users”) must operate AI systems in accordance with the instructions for use and must not misuse them.
- There is an obligation to report serious incidents or failures to the competent authorities.
8. Sanctions and supervision
The EU AI Act provides for substantial fines: for the most serious violations, up to €35 million or 7% of a company’s annual worldwide turnover, whichever is higher, with lower tiers for other infringements. Implementation is supervised by national authorities and EU-level bodies, including the European AI Office established within the European Commission.
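The fines follow a “whichever is higher” structure: a fixed cap or a percentage of worldwide annual turnover. The arithmetic can be illustrated with a short sketch; the €35 million / 7% figures apply to the most serious violations, and the turnover values below are made up for illustration:

```python
def max_fine_eur(annual_worldwide_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000.0,
                 turnover_pct: float = 0.07) -> float:
    """Upper bound of the fine for the most serious violations:
    a fixed amount or a percentage of annual worldwide turnover,
    whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * annual_worldwide_turnover_eur)

# A small company: the fixed EUR 35 million cap dominates.
small = max_fine_eur(10_000_000)       # 35,000,000
# A large company: 7% of turnover dominates.
large = max_fine_eur(2_000_000_000)    # 140,000,000
```

The turnover-based cap is what gives the regime teeth against the largest providers, for whom any fixed amount would be negligible.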
9. Impact of the EU AI Act on the future of AI
The EU AI Act will have a significant impact not only in Europe, but also globally. Many international companies will have to adapt their AI systems to European rules, which may lead to the emergence of de facto global standards—a dynamic often described as the “Brussels effect.”
Although there is criticism that regulation may slow down innovation, most experts believe that it will increase trust in AI in the long run and enable its sustainable development.
10. Conclusion
The EU AI Act represents a historic step in the regulation of artificial intelligence. Through a risk-based approach, the act manages to balance the need for innovation with the protection of citizens’ fundamental rights. A clear classification of AI systems, strict obligations for high-risk applications, and an emphasis on transparency and human oversight make the EU AI Act one of the most ambitious legal frameworks in the modern technological world.
At a time when artificial intelligence is increasingly affecting everyday life, the EU AI Act shows that it is possible to regulate technology in a way that is at once ethical, responsible, and future-oriented.
Author: Aleksandar Sajic
