We urge MEPs to prevent harmful and discriminatory AI systems in migration

We just sent this appeal to Malta’s MEPs, urging them to prevent the harmful use of AI systems in the context of migration.

On 14 June, the European Parliament will vote on the EU Artificial Intelligence Act (AI Act). On behalf of the 200 organisations supporting the #ProtectNotSurveil campaign, we call on MEPs to ensure equal protection for migrants and people on the move by preventing harm from the use of AI systems in the migration context.

For the AI Act to protect the rights of migrants and people on the move, it must include a comprehensive list of AI systems that pose an ‘unacceptable’ risk to fundamental rights (Article 5) in the context of migration.

We ask MEPs to include the following AI systems as prohibited in Article 5:

  • Predictive analytics systems when used to interdict, curtail and prevent migration: AI-based systems that predict migration flows carry a serious risk of enabling punitive migration responses, such as violence at the borders and push-backs.
  • AI-based individual risk assessment and profiling systems in the migration context drawing on personal and sensitive data: The use of automated risk assessment and profiling systems during migration procedures is a dangerous practice that poses a serious threat to the right to non-discrimination, both directly and indirectly.


These recommendations are supported by 200 experts and civil society organisations calling for the EU AI Act to protect people on the move.

For more information, see our detailed recommendations and the video ‘Nowhere to turn: How surveillance tech at the EU borders is endangering lives’, which outlines the use of AI-based technologies at and within EU borders.