{"id":1624,"date":"2026-02-06T09:35:00","date_gmt":"2026-02-06T08:35:00","guid":{"rendered":"https:\/\/advokatskafirmasajic.com\/blog\/?p=1624"},"modified":"2026-03-05T09:36:12","modified_gmt":"2026-03-05T08:36:12","slug":"eu-ai-act-concept-objectives-and-obligations","status":"publish","type":"post","link":"https:\/\/advokatskafirmasajic.com\/blog\/eu-ai-act-concept-objectives-and-obligations\/","title":{"rendered":"EU AI Act \u2013 Concept, Objectives, and Obligations"},"content":{"rendered":"\n<p><strong>1. Introduction<\/strong><\/p>\n\n\n\n<p>The development of artificial intelligence (AI) over recent decades has fundamentally transformed the way society functions. AI systems are now used in healthcare, education, finance, security, employment, the judiciary, and many other fields. Although these technologies bring numerous benefits, they also entail significant risks, particularly with regard to human rights, privacy, discrimination, and security. Recognizing the need for a clear and comprehensive regulatory framework, the European Union (EU) adopted the <strong>EU AI Act (Artificial Intelligence Act)<\/strong> \u2013 the world\u2019s first comprehensive legal instrument regulating the development, use, and placing on the market of AI systems.<\/p>\n\n\n\n<p><strong>2. Concept and significance of the EU AI Act<\/strong><\/p>\n\n\n\n<p>The EU AI Act is a <strong>regulation of the European Union<\/strong> that establishes uniform rules for the development, placing on the market, and use of artificial intelligence systems within the EU. As a regulation, rather than a directive, it is directly applicable in all Member States without the need for additional national legislation.<\/p>\n\n\n\n<p>The core idea of the EU AI Act is to enable the development of innovative AI technologies while simultaneously safeguarding citizens\u2019 fundamental rights, democratic values, and legal certainty. 
The Act is based on a risk-based approach, meaning that not all AI systems are treated equally: the level of regulation depends on the potential risk a system poses to individuals and society.<\/p>\n\n\n\n<p>The significance of the EU AI Act is reflected in several key aspects:<\/p>\n\n\n\n<ul><li>it represents a global precedent in the regulation of AI;<\/li><li>it increases public trust in AI technologies;<\/li><li>it ensures legal clarity for companies and innovators;<\/li><li>it strengthens the EU\u2019s position as a leader in the ethical and responsible use of technology.<\/li><\/ul>\n\n\n\n<p><strong>3. Objectives of the EU AI Act<\/strong><\/p>\n\n\n\n<p>The main objectives of the EU AI Act can be summarized as follows:<\/p>\n\n\n\n<ol type=\"1\"><li><strong>Protection of fundamental rights<\/strong> \u2013 preventing discrimination, mass surveillance, and unethical uses of AI.<\/li><li><strong>Safety and reliability of AI systems<\/strong> \u2013 ensuring that systems operate in a predictable, accurate, and secure manner.<\/li><li><strong>Transparency<\/strong> \u2013 users must be informed when they are interacting with an AI system.<\/li><li><strong>Promotion of innovation<\/strong> \u2013 creating a regulatory framework that does not hinder technological development.<\/li><li><strong>A unified EU market<\/strong> \u2013 preventing the fragmentation of rules among Member States.<\/li><\/ol>\n\n\n\n<p><strong>4. Definition of artificial intelligence in the EU AI Act<\/strong><\/p>\n\n\n\n<p>The EU AI Act provides a broad definition of artificial intelligence. 
Under Article 3(1) of the Act, an AI system is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.<\/p>\n\n\n\n<p>This broad, technology-neutral definition allows the regulation to cover both current and future AI technologies, thereby ensuring the long-term relevance of the Act.<\/p>\n\n\n\n<p><strong>5. Classification of AI systems according to risk level<\/strong><\/p>\n\n\n\n<p>One of the most important features of the EU AI Act is the classification of AI systems <strong>into four risk categories<\/strong>.<\/p>\n\n\n\n<p><strong>5.1. Unacceptable risk<\/strong><\/p>\n\n\n\n<p>AI systems that pose an <strong>unacceptable risk<\/strong> are completely prohibited. These are systems that seriously endanger fundamental rights and human dignity. Examples include:<\/p>\n\n\n\n<ul><li>AI systems for social scoring of citizens;<\/li><li>manipulative AI systems that influence human behavior without individuals\u2019 awareness;<\/li><li>certain forms of real-time remote biometric identification in publicly accessible spaces.<\/li><\/ul>\n\n\n\n<p>The prohibition of such systems demonstrates a clear ethical boundary that the EU is unwilling to cross.<\/p>\n\n\n\n<p><strong>5.2. High-risk AI systems<\/strong><\/p>\n\n\n\n<p>High-risk AI systems are permitted but subject to strict requirements. They are used in sensitive areas such as:<\/p>\n\n\n\n<ul><li>employment and candidate selection;<\/li><li>education and assessment;<\/li><li>creditworthiness and financial services;<\/li><li>medical diagnostics;<\/li><li>judicial and law enforcement systems.<\/li><\/ul>\n\n\n\n<p>Because these systems can have serious consequences for individuals\u2019 lives, the obligations imposed on their providers and users are particularly detailed.<\/p>\n\n\n\n<p><strong>5.3. Limited risk<\/strong><\/p>\n\n\n\n<p>AI systems with limited risk are primarily subject to transparency obligations. 
For example, chatbots or content-generation systems must clearly inform users that they are interacting with an AI system rather than a human.<\/p>\n\n\n\n<p><strong>5.4. Minimal or negligible risk<\/strong><\/p>\n\n\n\n<p>Most AI systems fall into this category, such as AI used in video games, image filters, or music recommendation systems. No additional legal obligations apply to these systems, and their use remains largely unrestricted.<\/p>\n\n\n\n<p><strong>6. Obligations for high-risk AI systems<\/strong><\/p>\n\n\n\n<p>The EU AI Act imposes a range of obligations on high-risk AI systems in order to ensure their safety, fairness, and reliability.<\/p>\n\n\n\n<p><strong>6.1. Risk management<\/strong><\/p>\n\n\n\n<p>Providers must establish a risk management system that identifies, assesses, and mitigates potential harmful impacts of the AI system throughout its entire lifecycle.<\/p>\n\n\n\n<p><strong>6.2. Data quality<\/strong><\/p>\n\n\n\n<p>Data used to train AI systems must be relevant, representative, and free from bias. This requirement aims to prevent discrimination based on gender, race, ethnic origin, or other personal characteristics.<\/p>\n\n\n\n<p><strong>6.3. Technical documentation<\/strong><\/p>\n\n\n\n<p>Providers are required to prepare detailed technical documentation that enables regulatory authorities to understand how the system operates and to verify its compliance with the law.<\/p>\n\n\n\n<p><strong>6.4. Transparency and user information<\/strong><\/p>\n\n\n\n<p>Users of high-risk AI systems must be provided with clear information about how the system functions, its limitations, and its proper use.<\/p>\n\n\n\n<p><strong>6.5. Human oversight<\/strong><\/p>\n\n\n\n<p>One of the key principles of the EU AI Act is the human-in-the-loop approach. This means that humans must be able to monitor, intervene in, or deactivate AI systems when necessary.<\/p>\n\n\n\n<p><strong>6.6. 
Accuracy, robustness, and cybersecurity<\/strong><\/p>\n\n\n\n<p>AI systems must achieve an appropriate level of accuracy, be robust against errors, and be protected against misuse and cyberattacks.<\/p>\n\n\n\n<p><strong>7. Obligations for AI system providers and users<\/strong><\/p>\n\n\n\n<p>In addition to providers, the EU AI Act also imposes obligations on other actors in the AI value chain.<\/p>\n\n\n\n<ul><li>AI system providers must ensure that their systems are compliant with the regulations before placing them on the market.<\/li><li>AI system users (referred to in the Act as deployers) must use AI in accordance with the provider\u2019s instructions and must not misuse it.<\/li><li>There is an obligation to report serious incidents or failures to the competent authorities.<\/li><\/ul>\n\n\n\n<p><strong>8. Sanctions and supervision<\/strong><\/p>\n\n\n\n<p>The EU AI Act provides for substantial fines for violations of its rules, which for the most serious infringements can reach up to 7% of a company\u2019s total worldwide annual turnover or EUR 35 million, whichever is higher. The implementation of the Act is monitored by national authorities and European institutions, including the European AI Office.<\/p>\n\n\n\n<p><strong>9. Impact of the EU AI Act on the future of AI<\/strong><\/p>\n\n\n\n<p>The EU AI Act will have a significant impact not only in Europe, but also globally. Many international companies will have to adapt their AI systems to European rules, which may lead to the creation of global standards.<\/p>\n\n\n\n<p>Although there is criticism that regulation may slow down innovation, most experts believe that it will increase trust in AI in the long run and enable its sustainable development.<\/p>\n\n\n\n<p><strong>10. Conclusion<\/strong><\/p>\n\n\n\n<p>The EU AI Act represents a historic step in the regulation of artificial intelligence. Through a risk-based approach, the Act manages to balance the need for innovation with the protection of citizens&#8217; fundamental rights. 
A clear classification of AI systems, strict obligations for high-risk applications, and an emphasis on transparency and human oversight make the EU AI Act one of the most ambitious legal frameworks in the modern technological world.<\/p>\n\n\n\n<p>At a time when artificial intelligence is increasingly affecting everyday life, the EU AI Act shows that it is possible to regulate technology in a way that is at once ethical, responsible, and future-oriented.<\/p>\n\n\n\n<p>Author: Aleksandar Sajic<\/p>\n","protected":false},"excerpt":{"rendered":"<p>1. Introduction The development of artificial intelligence (AI) over recent decades has fundamentally transformed the way society functions. AI systems are now used in&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[91,82],"tags":[],"_links":{"self":[{"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/posts\/1624"}],"collection":[{"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/comments?post=1624"}],"version-history":[{"count":1,"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/posts\/1624\/revisions"}],"predecessor-version":[{"id":1625,"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/posts\/1624\/revisions\/1625"}],"wp:attachment":[{"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/media?parent=1624"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/categories?post=1624"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/advokatskafirmasajic.com\/blog\/wp-json\/wp\/v2\/tags?post=1624"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}