The European Union Artificial Intelligence Act (“EU AI Act”) is widely regarded as the first comprehensive, binding legal framework governing the development, placement on the market, and use of artificial intelligence technologies. First proposed by the European Commission in April 2021, the regulation was formally adopted in 2024 following extensive negotiations and has since become the cornerstone of the European Union’s regulatory framework for artificial intelligence. Its primary objective is not to limit the economic and technological potential of AI, but to ensure that its development aligns with fundamental rights, safety, and the rule of law.

To that end, the European Union has adopted a “risk-based approach,” establishing a differentiated legal structure based on the level of risk posed by AI systems rather than subjecting all systems to a uniform regime. This structure categorizes AI systems into four main groups, each subject to varying degrees of legal obligations. The core aim of this approach is to avoid imposing unnecessary burdens on low-risk applications while ensuring strict oversight of systems that may affect human life and fundamental rights.
In this framework, the European Commission assumes a central role by publishing guidelines and practical examples for both high-risk and non-high-risk systems in order to ensure consistency in implementation. In addition, the Commission is empowered to amend the list of high-risk systems over time, taking into account technological developments, evolving use cases, and changing risk profiles.
The first category comprises systems that pose an “unacceptable risk.” These are applications deemed to be in direct conflict with the fundamental values of the European Union and are therefore prohibited outright within the EU. The prohibition extends not only to their use, but also to their development and placement on the market. This category includes systems that manipulate individuals through subliminal techniques, social scoring mechanisms, the creation of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition systems deployed in workplaces and educational settings, and biometric categorization systems that infer sensitive attributes. The legal rationale is that such systems constitute a disproportionate interference with fundamental rights, failing even the basic proportionality test.
The second category consists of “high-risk AI systems,” which are subject to the most stringent regulatory obligations under the EU AI Act. Classification as high risk depends not only on the technology itself but also on the context in which it is used: an AI system used in recruitment, for example, may be classified as high risk, while the same underlying technology could be considered low risk in another context. This category includes AI applications used in areas that directly affect individuals’ lives, such as employment, education, healthcare, creditworthiness assessment, the management of critical infrastructure, and law enforcement. The obligations imposed on these systems are extensive. Providers and deployers are required to establish a risk management system spanning the entire lifecycle of the AI system; the accuracy and quality of datasets must be ensured; and comprehensive technical documentation must be prepared and made available for inspection. Furthermore, systems must be traceable, decision-making processes must be auditable ex post, and effective human oversight must be guaranteed. The principle of “human oversight” in particular emerges as a key legal safeguard, ensuring that AI does not function as the ultimate decision-maker.
The third category covers “limited-risk systems”, where the regulatory focus is primarily on transparency obligations. Users must be clearly informed when they are interacting with an AI system, a requirement that is particularly relevant for chatbots and AI-generated content systems. The clear labeling of deepfake content also falls within this scope. The underlying legal objective is to prevent deception and ensure transparency in digital interactions.
The fourth category includes “minimal-risk systems”. These are applications widely used in everyday life that do not have a significant impact on individuals’ rights. Examples include spam filters, recommendation algorithms, and AI systems used in video games. In this area, regulatory intervention is minimal. The European Union’s approach here is to avoid stifling innovation through unnecessary administrative burdens.
When examining the overall structure of the regulation, it becomes evident that it establishes not only a technical compliance framework but also a multi-layered governance model. The principles of transparency, safety, accountability, and human control form the foundation of this model. In particular, the requirement for human intervention in high-risk systems serves as a critical legal safeguard, preventing AI from evolving into a fully autonomous decision-making authority.
Finally, the EU AI Act is supported by a robust enforcement regime. Significant administrative fines are provided for non-compliance, reaching up to EUR 35 million or 7% of a company’s global annual turnover for the most serious violations, whichever is higher. As a result, the regulation extends beyond being merely an internal EU legal instrument and effectively sets a global standard for technology companies. This phenomenon is often referred to in the literature as the “Brussels Effect.”
Overall, the EU AI Act is not a prohibition-based regime aimed at restricting artificial intelligence; rather, it is a comprehensive legal framework that classifies AI technologies, imposes proportionate obligations based on risk levels, and governs them within the framework of fundamental rights. Through this regulation, the European Union has introduced a new normative paradigm, recognizing artificial intelligence not only as a field of technological advancement but also as an area of legal and ethical governance.
@Çağla BARUT