Providers of high-risk AI systems shall, among other things:

  • Put in place a risk management system during the entire lifecycle of the high-risk AI system.
     
  • Apply data management and ensure that training, validation and test data sets are relevant, sufficiently representative and, as far as possible, free from errors and fully consistent with the intended purpose.
     
  • Prepare technical documentation to demonstrate compliance and provide the authorities with information enabling them to assess that compliance.
     
  • Design their high-risk systems in such a way that they can automatically record events that are relevant for identifying risks at national level and substantial modifications throughout the system's lifecycle.
     
  • Design their high-risk systems in such a way that they achieve appropriate levels of accuracy, robustness and cybersecurity.
     
  • Provide instructions for use to downstream users to enable them to comply with the requirements.
     
  • Establish a quality management system to ensure compliance.
     
  • Design their AI systems in such a way that deployment organisations can implement human oversight.
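In engineering terms, the automatic event-recording obligation above amounts to a structured, timestamped audit-logging requirement. A minimal sketch of what such record-keeping could look like — the event names and record fields here are illustrative assumptions, not terminology from the Act:

```python
from datetime import datetime, timezone

class AuditLogger:
    """Illustrative sketch of automatic event recording for a high-risk AI system.

    Field names and event types are hypothetical; in practice, records would go
    to append-only, tamper-evident storage rather than an in-memory list.
    """

    def __init__(self):
        self.records = []

    def record_event(self, event_type, detail):
        # Each lifecycle-relevant event is stored as a timestamped,
        # machine-readable record that authorities could later inspect.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,
            "detail": detail,
        }
        self.records.append(entry)
        return entry

audit = AuditLogger()
audit.record_event("inference", {"model_version": "1.2.0"})
audit.record_event("substantial_modification", {"change": "retrained on new data"})
```

The design choice to log substantial modifications alongside routine events reflects the bullet above: both risk-relevant events and lifecycle changes must be traceable.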

If high-risk AI systems are used for network management, customer-service automation or fraud detection in the telecommunications sector, they must comply with the requirements set out above.
