From lie detection tools tested at borders to dialect recognition technologies, a multitude of new technologies is being used and tested on migrants, including asylum seekers, across Europe. A comprehensive research report by Derya Ozkul and the AFAR project maps and explores each practice in detail.
The EU’s proposed Artificial Intelligence Act categorises AI uses in immigration, asylum and border management as high risk, yet new technologies are already deployed across far more aspects of migration and asylum ‘management’ than is commonly imagined. To reflect meaningfully on the AI Act proposal, we first need to understand how these technologies are currently used, but this information is not always publicly available.
The new report by the Algorithmic Fairness for Asylum Seekers and Refugees (AFAR) project shows the multitude of uses of new technologies across Europe, at both the national and EU levels. In particular, the report examines in detail forecasting tools, risk assessment and triaging systems, the processing of short- and long-term residency and citizenship applications, document verification, speech and dialect recognition, the distribution of welfare benefits, matching tools, mobile phone data extraction and electronic monitoring. It highlights the need for transparency and thorough training of decision-makers, as well as the inclusion of migrants’ interests in the design, decision and implementation stages.
Drawing on her 12-month research at the University of Oxford, lead researcher Dr Derya Ozkul said: ‘Not all uses of new technologies are necessarily bad. Matching tools, for example, can help match asylum seekers to places they wish to live. We therefore need to analyse each technology on its own, considering the context in which it was developed, its design properties, and how it is being used.’
Very few of these technologies are developed primarily to benefit migrants, including asylum seekers and refugees. Latvia, for example, has created a speech recognition tool designed to help migrants with their citizenship applications, and several matching tools aim to match migrants and asylum seekers with the states and municipalities best suited to their integration. But many others are not designed with migrants’ needs in mind.
Dr Ozkul noted: ‘Currently, new technologies are seen as a silver bullet for complex problems in public administration. Many technologies have been introduced in the immigration and asylum area without any real understanding of their impact. Who benefits from them, who has access to their details, and who is excluded remain key questions.’
While some technologies can help expedite decision-making, they also carry serious risks of human rights violations. The Algorithmic Fairness for Asylum Seekers and Refugees project aims to explore the implications of these practices for human rights and to ask whether existing legal standards can respond to the issues emerging from automated decision-making. At its heart, the project examines what a fair migration and asylum system means and how it can be institutionalised.
Read the report ‘Automating Immigration and Asylum’ here (PDF).