
Criminalisation at European Borders and the Role of Artificial Intelligence


Guest post by Dr Foteini Kalantzi. Dr Foteini Kalantzi is a Research Associate at the School of Politics and International Relations, QMUL. She has been an Assistant Professor in the Department of Politics and International Relations at the American College of Greece, and an A.G. Leventis Research Fellow at St Antony's College, University of Oxford.

[Image: a person stands in front of a huge screen with vertical blue and purple lines of light. Credit: Unsplash]
AI-driven tools entrench the securitisation of migration while posing ethical, legal, and political challenges.

Over the past three decades, European border governance has shifted from facilitating mobility within the Schengen Area to managing perceived threats and criminalising migration. Scholars such as Bosworth highlight how states and the EU increasingly treat border-crossing as a potential crime, evident in detention expansion, surveillance intensification, and militarisation in the Mediterranean. Central to this transformation is the rise of artificial intelligence (AI). AI-driven tools—predictive analytics, biometric databases, and automated surveillance—entrench the securitisation of migration while posing ethical, legal, and political challenges. By embedding existing biases within seemingly neutral systems, AI reinforces racialised exclusion and normalises constant monitoring.  

This piece examines the historical roots, technological emergence, and socio-political consequences of AI’s role in Europe’s border criminalisation. It looks at how AI-driven border infrastructures reconfigure the modalities through which migration is governed, surveilled, and disciplined. It shows how the convergence of securitisation and criminalisation is mutually reinforcing: the criminalisation of migration provides the normative and legal justification for intrusive surveillance and detention practices, while securitisation offers the epistemic and political framework that legitimises the deployment of advanced technologies under the guise of efficiency and safety. 

Criminalisation at borders in Europe 

Since the early 1990s, European border governance has increasingly criminalised migration. Following the Schengen Agreement and post-Soviet shifts, the EU liberalised internal mobility while externalising controls through readmission agreements and cooperation with states like Libya and Turkey. Discursively, policymakers, media, and security actors linked irregular migration to crime and terrorism, producing what Didier Bigo terms a “security continuum” and fostering a phenomenon termed crimmigration (detailed in the work of Juliet Stumpf). 

Operationally, this trend appears in detention, criminalisation of irregular entry, and militarised border control. Frontex exemplifies this shift: once a coordinating body, it now wields its own assets, intelligence, and a near-billion-euro budget. Tasked with surveillance, risk analysis, and returns, it frames the migrant body as a criminalised, governable subject. 

The rise of AI in border management 

Within this securitised context, AI has become integrated into established infrastructures: Eurodac stores the fingerprints of asylum seekers and relies on automated matching algorithms, while the Schengen Information System (SIS II) tracks alerts across member states. Systems like the Entry/Exit System (EES) and the European Travel Information and Authorisation System (ETIAS) are expanding the scope of biometric surveillance, processing millions of data points automatically. Together, these systems mark a turn from fixed procedural rules to dynamic risk-management models: they do not merely confirm compliance with predefined criteria, but estimate the likelihood of threat or irregularity, operationalising uncertainty as calculable risk.
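To make that contrast concrete, here is a minimal sketch of the two logics: a fixed compliance check that answers yes or no, and a weighted risk score that flags a traveller even when every rule is satisfied. All field names, weights, and the threshold are invented for illustration and do not describe the actual logic of EES, ETIAS, or any other EU system.

    # Hypothetical sketch: field names, weights, and threshold are invented.
    def rule_based_check(traveller: dict) -> bool:
        """Old logic: confirm compliance with fixed, predefined criteria."""
        return (
            traveller["document_valid"]
            and not traveller["overstay_on_record"]
        )

    def risk_score(traveller: dict, weights: dict) -> float:
        """New logic: estimate a likelihood of 'irregularity' instead of
        checking rules, rendering uncertainty as a calculable number."""
        return sum(weights[k] * traveller.get(k, 0.0) for k in weights)

    traveller = {
        "document_valid": True,
        "overstay_on_record": False,
        "prior_refusals": 0.0,   # illustrative "risk indicators"
        "route_anomaly": 0.3,
    }
    weights = {"prior_refusals": 0.5, "route_anomaly": 0.4}

    print(rule_based_check(traveller))            # True: every rule is met
    print(risk_score(traveller, weights) > 0.1)   # True: flagged anyway

The point is structural: the second function has no notion of compliance, only of probability, so there is no rule a traveller could satisfy to be certain of passing.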

The EU AI Act acknowledges that AI systems used in migration, asylum, and border control management are high-risk, because such systems can significantly influence whether someone can enter a country, obtain protection, or have their rights recognised. The Act requires these systems to meet high standards of accuracy, non-discrimination, transparency, and respect for privacy and personal data rights. Nevertheless, it fails migrants by excluding harmful systems like biometric surveillance, predictive analytics, and AI lie detectors from its prohibitions.

Risk profiling now defines automated border control, using algorithms to analyse data and behaviour to pre-emptively identify “risky” travellers. Coupled with drones, sensors, and satellites, AI enhances surveillance systems like EUROSUR, which employs machine learning to detect and classify migrant movements before people reach Europe. Projects like iBorderCtrl, which trialled AI-based lie detection from micro-expressions and voice cues, illustrate the ambition to mechanise migration decision-making by automating human judgement, despite sustained ethical and scientific criticism.

Ethical, legal and political implications 

The use of AI in border governance raises major ethical and political concerns, especially regarding bias, accountability, and surveillance. Trained on biased data, AI can label certain groups as “high risk”, reproducing discrimination. The MIT Gender Shades study found that commercial facial analysis systems misclassified darker-skinned women at error rates above one in three, versus near-perfect accuracy for lighter-skinned men: biases that, in border contexts, risk wrongful detention, denial, or deportation.
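The scale of that disparity is easy to state as arithmetic. The sketch below applies the study's worst reported commercial error rates (34.7% for darker-skinned women, 0.8% for lighter-skinned men) to a hypothetical group of 1,000 travellers each; the border queue itself is an assumption for illustration, not a measured outcome of any deployment.

    # Error rates as reported by Gender Shades; the 1,000-traveller
    # queue is a hypothetical scenario, not data from any border system.
    rate_darker_women = 0.347
    rate_lighter_men = 0.008
    group_size = 1_000

    errors_dw = rate_darker_women * group_size   # ~347 misclassifications
    errors_lm = rate_lighter_men * group_size    # ~8 misclassifications

    print(f"{errors_dw:.0f} vs {errors_lm:.0f} errors per 1,000 "
          f"({errors_dw / errors_lm:.0f}x disparity)")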

ETIAS employs a screening-rules algorithm, based on automation and data matching, to profile visa-exempt travellers: it cross-checks personal data against EU security databases to produce predictive risk scores. As has been argued, ETIAS could gradually shift from a transparent, rule-based system toward semi-automated profiling that resembles AI decision-making, fostering overreliance on algorithmic outputs and eroding meaningful human oversight. Legal literature questions its clarity, foreseeability, and oversight, highlighting a shift from rule-based governance to probabilistic risk management that normalises pre-emptive suspicion before any offence occurs.
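A toy illustration of that drift, from a contestable hit/no-hit answer to an open-ended similarity score: the watchlist, the names, and the use of naive string matching below are all invented, and bear no relation to how ETIAS actually compares records.

    from difflib import SequenceMatcher

    WATCHLIST = {"JANE DOE", "JOHN ROE"}   # hypothetical alert list

    def exact_check(name: str) -> bool:
        """Rule-based: a verifiable yes/no a traveller could contest."""
        return name.upper() in WATCHLIST

    def fuzzy_score(name: str) -> float:
        """Profiling-style: a score with no bright line, inviting
        deference to whatever threshold the system happens to set."""
        return max(SequenceMatcher(None, name.upper(), w).ratio()
                   for w in WATCHLIST)

    applicant = "Jane Do"          # e.g. a transliteration variant
    print(exact_check(applicant))  # False: no hit
    print(fuzzy_score(applicant))  # ~0.93: now a "risk" to be judged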

Furthermore, under Regulation 2024/1358, Eurodac has evolved from a narrowly defined asylum database into a powerful apparatus for migration surveillance and control. Once designed to identify asylum applicants, it now collects biometric data from six migrant categories and links multiple records to track individuals across EU territory. 

EU asylum systems using fingerprint databases like Eurodac have produced more frequent mismatches for individuals with worn or damaged prints, common among manual labourers and refugees. These technical errors can translate into serious administrative and legal consequences: travellers may face delays, additional questioning, or denial of entry, and asylum seekers can experience prolonged registration processes, wrongful detention, or even deportation due to mistaken identity.
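A simplified sketch of how such a false non-match arises: a person enrolled with a clean print is queried later with a worn one and falls below the match threshold. The score, threshold, and character comparison are stand-ins invented for illustration; real fingerprint matchers compare minutiae features, not strings.

    MATCH_THRESHOLD = 0.85   # hypothetical operating point

    def similarity(enrolled: str, probe: str) -> float:
        """Stand-in for a minutiae comparison score in [0, 1]."""
        matches = sum(a == b for a, b in zip(enrolled, probe))
        return matches / max(len(enrolled), len(probe))

    enrolled_template = "RIDGE-PATTERN-0042"
    worn_probe = "RIDGE-PAT___N-0042"   # abrasion obscures features

    score = similarity(enrolled_template, worn_probe)
    print(score >= MATCH_THRESHOLD)
    # False (~0.83): the system treats a returning person as unknown,
    # triggering re-registration, questioning, or mistaken identity.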

Accountability for AI and biometric technologies at Europe’s borders is fragmented across a complex network of actors: EU agencies like Frontex and eu-LISA, national authorities, and private contractors. This diffusion of responsibility creates gaps in oversight of algorithms, biometric data, and rights protection. Frontex directs operations, eu-LISA manages databases such as Eurodac and VIS, while states and vendors handle data and systems under opaque procurement frameworks. Consequently, errors or rights violations lack clear accountability. Embedding AI in border infrastructures normalises perpetual surveillance, casting migrants as enduring suspects and extending these technologies into broader policing and security domains across European societies.

Resistance and alternatives 

There is growing resistance to AI-driven securitisation and criminalisation at borders. Civil society organisations like Statewatch and Privacy International have criticised EU funding for AI surveillance projects and raised awareness of their human rights implications. Legal challenges have also been brought before the Court of Justice of the European Union (CJEU), particularly concerning data protection and privacy under the General Data Protection Regulation (GDPR) and the EU Charter of Fundamental Rights.

The growing use of sensors, biometrics, and AI at EU borders raises major rights concerns: privacy violations, discrimination from misidentification, and weakened asylum safeguards. Recent litigation before the CJEU and the European Court of Human Rights (ECtHR), along with interventions by the European Data Protection Supervisor, highlights growing scrutiny of Frontex, eu-LISA, and private vendors over opaque data practices and accountability gaps.
