Prohibited AI Practices: What’s Banned Under the EU AI Act

Prohibited AI practices sit at the highest level of restriction under the EU AI Act. In practice, the Act treats these practices as posing unacceptable risk because they threaten people's health, safety, and fundamental rights. In this article, we explain what the EU bans, why it bans these practices, and what your organisation can do to stay compliant.

What Does “Prohibited AI” Mean?

The EU AI Act defines prohibited AI systems as those whose use is considered fundamentally incompatible with EU values such as human dignity, privacy, and non-discrimination. Therefore, the regulation prohibits organisations from placing these systems on the EU market or using them within the EU — regardless of purpose, safeguards, or user consent. Prohibited AI sits at the top of the Act’s four-tier risk model (unacceptable, high, limited, minimal). Importantly, it also triggers the strictest enforcement and the highest penalties under the regulation.


Overview: What Does the EU AI Act Prohibit?

The EU AI Act sets new standards for how organisations develop and use AI across the European Union. At the top of its risk-based framework, the Act places prohibited AI practices — AI uses that create an unacceptable risk to health, safety, and fundamental rights. As a result, the EU has banned these practices with effect from 2 February 2025.

In the sections below, we break down what the law prohibits, why it matters, and how your organisation can reduce risk and stay compliant.


Examples of Prohibited AI Practices 

1. Manipulative or Deceptive AI
AI systems that intentionally manipulate human behaviour, emotions, or vulnerabilities to cause harm. Examples include voice-based persuasion, emotional manipulation in children’s content, or nudging people into harmful actions. Why banned: The EU AI Act prohibits AI that “distorts human behaviour” or exploits users’ weaknesses in ways that may cause psychological or physical harm.

2. AI Used for Exploitation of Vulnerable Individuals
AI systems that target individuals or groups by exploiting vulnerabilities linked to age, disability, or socio-economic situation. Why banned: Exploitation undermines human dignity and conflicts with the EU’s fundamental rights framework.

3. Social Scoring Systems
AI systems that evaluate or rank individuals based on personal behaviour, socio-economic status, or predicted characteristics. Why banned: Social scoring can drive discrimination and undermine fairness — echoing real-world concerns about mass surveillance and social credit-style scoring systems.

4. Predictive Policing Based on Profiling or Behaviour
AI systems that predict criminal behaviour or potential criminality based on data patterns or personal traits. Why banned: These systems can reinforce bias, breach the presumption of innocence, and produce discriminatory outcomes.

5. Emotion Recognition in Workplaces or Education
AI systems that analyse facial expressions, voice, or body language to interpret emotions of employees or students. Why banned: Emotion recognition intrudes on privacy and dignity and can enable misuse in performance evaluation, recruitment, or classroom monitoring.

6. Real-Time Remote Biometric Identification in Public Spaces
AI systems that perform real-time facial recognition or biometric tracking in public spaces, especially in law enforcement contexts. Why banned: These systems threaten privacy and civil liberties. Narrowly defined exceptions apply for law enforcement with prior judicial authorisation (e.g., locating victims of serious crimes).


What Organisations Should Do

Knowing which AI systems are prohibited under the EU AI Act is the first step toward responsible AI governance.
By eliminating prohibited use cases, your organisation protects both users and reputation — and ensures trust, compliance, and sustainable innovation.

Even if your organisation does not develop AI systems directly, it’s essential to verify that vendors or third-party solutions you use are compliant.

Follow the checklist below to make sure you comply with the EU AI Act regarding prohibited AI systems.

Checklist: How to Avoid Prohibited AI Practices

1. Conduct an AI system inventory
First, map all AI tools currently in use and document their purpose, data sources, and users.

2. Screen for prohibited functionality
Next, check each system for prohibited functionality — including manipulation, biometric tracking in public spaces, emotion recognition in workplaces or education, and social scoring.

3. Strengthen procurement policies
Then, update procurement policies and require suppliers to confirm EU AI Act compliance and disclose relevant AI features and intended use.

4. Train employees
Finally, include “unacceptable risk” categories in AI literacy and ethics training, and ensure clear ownership for reviewing and approving AI use cases.
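The inventory and screening steps above could be sketched as a simple capability check. This is a minimal, illustrative sketch only: the category labels, the inventory format, and the `screen_inventory` function are assumptions for demonstration, not a legal test of compliance.

```python
# Hypothetical sketch of steps 1-2: screen an AI system inventory
# against the EU AI Act's prohibited-practice categories.
# Category labels and data format are illustrative assumptions.

PROHIBITED_CATEGORIES = {
    "manipulation",          # manipulative or deceptive AI
    "exploitation",          # exploiting vulnerable individuals
    "social_scoring",        # social scoring systems
    "predictive_policing",   # profiling-based crime prediction
    "emotion_recognition",   # in workplaces or education
    "remote_biometric_id",   # real-time, in public spaces
}

def screen_inventory(inventory):
    """Return systems whose declared capabilities match a prohibited category.

    `inventory` is a list of dicts like {"name": ..., "capabilities": [...]}.
    Flagged systems need human legal review, not automatic removal.
    """
    flagged = []
    for system in inventory:
        hits = PROHIBITED_CATEGORIES.intersection(system.get("capabilities", []))
        if hits:
            flagged.append({"name": system["name"], "matched": sorted(hits)})
    return flagged

# Example inventory from step 1 (fictional systems)
inventory = [
    {"name": "HR sentiment monitor", "capabilities": ["emotion_recognition"]},
    {"name": "Invoice OCR", "capabilities": ["document_parsing"]},
]
print(screen_inventory(inventory))
```

A real screening exercise would rest on the Act's legal definitions rather than keyword matching, but a structured inventory like this makes it clear which systems to escalate for review.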


Self-Assessment Tool

Taking these steps now will help you get started on your compliance journey. Do you need help or guidance? Use our Self-Assessment Tool to clarify your organisation’s role and risk level.

24 Apr, 2026