AI for border control – a ‘geopolitical innovation race’ at the EU’s external borders
Guest post by Stefka Schmid and Julia Mahlberg. Stefka is a postdoctoral researcher at the chair Science and Technology for Peace and Security (PEASEC) at the Technical University of Darmstadt, Germany. Her research interests include visions of human-AI interaction and innovation policies in the context of security governance. Julia Mahlberg is a student assistant at PEASEC and enrolled in the international studies/peace and conflict studies master program in Darmstadt and Frankfurt. She is interested in migration politics as well as the use of technology in crises.
This post is part of a collaboration between Border Criminologies and Geopolitics that seeks to promote open access platforms. The full article is free to download.
State actors envision artificial intelligence (AI) as beneficial and transformative, including in security contexts such as the military or border control. For example, the Frontex-supported Horizon project BorderForce, initiated in 2024, aims at ‘automated border surveillance’, including ‘AI-enhanced object detection’ to provide ‘risk indicators’ to the ‘end-user’. Yet, while the EU AI Act regulates commercial AI applications, it falls short in regulating AI for the purpose of ‘border security’, as it is unclear which applications are categorised as so-called ‘high-risk AI’. Considering the EU branding of ‘trustworthy AI’, which emphasises civil rights, it becomes apparent that regulation predominantly addresses ‘European’ contexts. AI relying on data of non-European ‘others’ remains largely unregulated and follows – as part of a ‘geopolitical innovation race’ – economic and security logics. Together with Daniel Lambach, Carlo Diehl and Christian Reuter, Stefka published an article on the ‘geopolitical innovation race’ of AI. The proposed concept aims to capture the global dynamics of AI innovation policies. In our work, we conducted an analysis ‘from afar’, studying policy documents targeting AI R&D by the People’s Republic of China, the European Union (EU), and the United States.
Our main argument builds on the idea that these three different technopoles are all part of, and together construct, the global economic and security environment. We find that these spaces of innovation and collaboration among state, economic, and academic actors engage in processes of bordering through enacting geopolitical AI innovation policies. Through them, spaces of innovation are (re)territorialised. An interdisciplinary focus on ‘safety-critical scenarios’ in human-computer interaction drew our attention to the role AI plays in conducting ‘boundary work’ in high-stakes border control settings. Further, harmful politics at EU (especially external) borders underscored the need to shift our research focus.
Thus, we started to think about how such a ‘geopolitical innovation race’ plays out at a technopole’s border. To illustrate these dynamics, we look at EU border control initiatives with a focus on AI. The geopolitical innovation race is structured around four key dimensions that differentiate it from an ‘arms race’ and an ‘innovation race’: (1) pay-off structure, (2) actor networks, (3) motivation, and (4) the social construction of technology. The term ‘geopolitical innovation race’ seeks to capture current dynamics of tech governance. Border control is neither the most prominent nor most publicly discussed field of AI application, but we still find that related EU politics reflect notions of a ‘geopolitical innovation race’. From a technocratic, problem-solving perspective, border control represents just another ‘context of use’ of AI, neglecting that the humans involved face impactful consequences of technology use. Characteristics of a geopolitical innovation race are notable in innovation projects for border control, illustrating a similar mode of politics across application contexts but with fewer ethical obligations to implement ‘trustworthy AI’ compared to commercial products.
The geopolitical innovation race is well illustrated by EU border control initiatives such as the BorderForce project and the CRiTERIA project. BorderForce is an EU-funded project intended to promote the adoption and use of AI and publicly accessible data, i.e., OSINT. As noted, these are supposed to be linked with existing surveillance systems such as mobile stations, surveillance towers, drone defence functions, satellite connections, unmanned aerial vehicles, and autonomous sensors. Following an iterative design approach, technology use in ‘typical’ scenarios at the borders is first tested and then, after evaluation, implemented to enhance ‘real-time surveillance capabilities’ and enable threat assessment.
The CRiTERIA project focuses on the development and implementation of AI-based analysis of publicly accessible information to improve indicators for risk assessment in the context of crossings of the EU’s external borders. To this end, it follows a multimodal and social network analytical approach.
First, as the race surrounding AI innovation is geopolitical, collaboration for technology adoption in border control mainly takes place within the EU. Still, HEROES, a project to counter human trafficking and child sexual exploitation, involves global cooperation with partners from Asia and South America. In this context, a ‘fake job offer classifier’ has been proposed as a technological solution, as well as detection methods for the analysis of (visual) criminal material. While cooperation among technopoles might be rare, this project shows that, contrary to an AI ‘arms race’, positive-sum scenarios are identified when there is a shared goal, with the European Commission stating that ‘HEROES is making the world safer for children and others at risk of human trafficking.’
Second, border control projects are carried out by actor networks. The European technopole builds on collaboration among, for example, the Estonian and Swedish (border) police, Greek and Maltese universities, and companies such as Hensoldt Analytics. In the context of Horizon projects, collaboration between governmental actors, agencies, industry, and academia becomes possible. These networks are less tightly coupled in ‘national silos’ than in a traditional ‘arms race’. However, in contrast to an innovation race guided by economic logic, neither BorderForce nor CRiTERIA relies on transnational or global partners located outside the European Union, emphasising the geopolitical character of border politics.
Third, we find different stated motivations alongside the prominent aim of improving European border security. While this overarching aim reflects the geopolitical notion of how innovation is applied at the border, it is formulated in reference to diverse threats ‘from increased waves of illegal migration to human trafficking, document fraud, terrorism, smuggling, and public health threats’. Further, the motivations of BorderForce include human rights, pointing to envisioned societal benefits of technological innovation. Besides the goal of ‘regional stability’, the project sets out to ensure ‘seamless operations in monitoring the flow of goods, people, and information’, indicating economic motivations alongside security considerations.
Fourth, these efforts indicate how AI is socially constructed. When applied to border control, AI is understood as a technological fix where ‘traditional methods are insufficient’. AI is envisioned to be implemented relying on robotics, sensors, and OSINT data, and to be ethically aligned with the European approach to trustworthy AI, based on human-centered design and considering a ‘wider perspective of risks’. As in other contexts of application and use, AI is enthusiastically understood to unlock potentials for border security, provided relevant social media data can be used.
This illustrates how liberal democratic governmental actors aim to adopt AI for the governance of borders. In contrast to the EU’s proclaimed ethical approach to AI, which is primarily directed at domestic civilian use, AI applications for border control are less clearly regulated. Given a geopolitical and economically competitive global context, civilian AI applications need to be ‘trustworthy’ for the EU to contribute to the race. While AI implementation at the margins of the European technopole is presented as another context of use in which human-centered design should be realised, the public trust necessary for broad AI uptake does not depend on it. In line with critical perspectives in international relations and human-computer interaction, AI for border control needs to be contextualised. This can be done by investigating ‘less successful’ and non-state actors’ visions and uses of AI, as well as by expanding individualist design approaches to operator-AI interaction to include implications for targeted individuals.
Any comments about this post? Get in touch with us! Send us an email, or find us on LinkedIn and BlueSky.
How to cite this blog post (Harvard style):
S. Schmid and J. Mahlberg (2025) AI for border control – a ‘geopolitical innovation race’ at the EU’s external borders. Available at: https://blogs.law.ox.ac.uk/border-criminologies-blog/blog-post/2025/07/ai-border-control-geopolitical-innovation-race-eus (Accessed: 05/12/2025).