The AI Act divides AI systems into four categories depending on their risk levels:

  • Systems with unacceptable risk: AI systems representing a manifest threat to safety and human rights are prohibited. These include systems that deploy subliminal techniques or exploit the vulnerabilities of specific groups, as well as social scoring by public authorities.
     
  • Systems with high risk: AI systems that have a significant impact on the rights or safety of individuals are subject to strict requirements. Examples include biometric identification systems, critical infrastructure management, and AI applications in healthcare.
     
  • Systems with limited risk: These AI systems are subject to specific transparency obligations, such as informing users that they are interacting with an AI system. This category includes chatbots and AI for customer service.
     
  • Systems with minimal risk: AI systems posing minimal or no risk are largely exempt from regulation. This category includes most AI applications, such as spam filters and AI-powered games.
