Towards “smart”… but hybrid borders
In response to criticism over the opacity of algorithmic decisions, the latest generations of filtering and assessment tools increasingly rely on Explainable AI (XAI). XAI provides border control agents with a justified rationale, in plain language, for each decision or alert (for example, an anomaly percentage accompanied by the factors that triggered it).
Explainable AI refers to a set of methods that enable human users to understand and trust the decisions generated by machine-learning algorithms, and thus aims to improve the accuracy, fairness, transparency, and reliability of AI-based decisions. In other words, it seeks to describe how an AI model functions, its expected impact, and its potential biases, which in turn serves the following purposes:
- building trust in the use of AI models in operational settings,
- improving the understanding of AI decision-making processes,
- ensuring monitoring and accountability of the models,
- avoiding blind reliance on complex systems often perceived as “black boxes,” especially those using deep learning.
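To make the notion of an explained alert more concrete, the sketch below is a purely illustrative example in Python, not a description of any deployed system: it assumes a simple interpretable risk model (a scikit-learn logistic regression trained on synthetic data, with hypothetical feature names) and shows how an alert could be presented to an operator as a risk percentage together with the factors that raised it.

```python
# Minimal, purely illustrative sketch: an interpretable risk model whose
# per-feature contributions can be rendered as a plain-language explanation.
# Feature names and data are entirely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["document_inconsistency", "watchlist_similarity", "route_irregularity"]

# Synthetic training data: 500 past crossings, a minority labelled as anomalies.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, 2.0, 1.0]) + rng.normal(scale=0.5, size=500) > 2.5).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_alert(x_raw):
    """Return a risk percentage plus the features that pushed the score up."""
    x = scaler.transform([x_raw])[0]
    risk = model.predict_proba([x])[0, 1]
    contributions = model.coef_[0] * x  # linear contribution of each feature
    ranked = sorted(zip(feature_names, contributions), key=lambda t: -t[1])
    lines = [f"Estimated anomaly risk: {risk:.0%}"]
    for name, c in ranked:
        if c > 0:
            lines.append(f"  - {name} raised the score (contribution {c:+.2f})")
    return "\n".join(lines)

print(explain_alert([2.1, 1.8, 0.3]))  # example alert shown to the operator
```

The point of such a presentation is not the specific model, but that the operator sees both the score and the reasons behind it, rather than an unexplained flag.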
It is worth recalling that deep learning enables object detection (people, vehicles, drones, etc.), action and behavior recognition in video, as well as more advanced contextual analysis, even extending to emotion or posture recognition.
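For readers who want to picture what such detection looks like in code, here is a minimal, hypothetical sketch using a pre-trained detector from the PyTorch/torchvision stack (a recent torchvision release is assumed); an operational system would run on live video streams and add tracking, behavior analysis, and human review.

```python
# Minimal sketch of deep-learning object detection, assuming PyTorch/torchvision.
import torch
import torchvision

# Pre-trained detector (COCO classes include "person", "car", "truck", ...).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Stand-in for a video frame: a 3-channel image tensor with values in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]  # boxes, class labels and confidence scores

for label, score in zip(detections["labels"], detections["scores"]):
    if score.item() > 0.8:  # only report confident detections to the operator
        print(f"object class {int(label)} detected with confidence {score.item():.2f}")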
AI decisions must be explainable or justifiable so that citizens and institutions can understand how they are made. XAI therefore relies on specific techniques that allow each decision made by a given model to be traced and explained. The main objectives of this approach are:
- prediction accuracy, assessed by comparing the outputs the model explains with the results in the training data,
- traceability, by restricting decisions to defined and understandable rules (a minimal sketch follows this list),
- understanding of decisions, by emphasizing team training: staff will have greater confidence in the process if they understand the reasoning that led to the results produced by the AI.
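The traceability objective can be illustrated with a minimal sketch: a deliberately shallow decision tree (built here with scikit-learn on synthetic data, with hypothetical feature names) whose complete rule set can be printed and audited line by line, so that every decision path remains visible.

```python
# Minimal sketch of "traceability by design": a deliberately shallow decision
# tree whose decision rules can be printed and audited. Feature names and
# data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["document_inconsistency", "watchlist_similarity", "route_irregularity"]

X = rng.normal(size=(500, 3))
y = (X[:, 0] > 1.0) | (X[:, 1] > 1.5)  # synthetic "refer to an officer" label

# max_depth=2 keeps the rule set small enough to read and audit line by line.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree, feature_names=feature_names))
```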
Strengthening confidence in these systems also requires, according to experts, maintaining their hybrid nature—that is, continuous human oversight, the development of multidisciplinary teams, and the implementation of governance mechanisms and regular algorithm audits.
While the regulatory framework is gradually taking shape (the European Union’s AI Act categorizes internal security as a high-risk domain and therefore imposes strict requirements), it still struggles to fully regulate certain uses of AI in public spaces, facial recognition being a prime example.
This is why the future appears to favor hybrid AI in the service of security in general, and border control in particular. Within these limits, AI does not replace humans but assists them effectively by reducing operators’ cognitive load: it enables early and systematic anomaly detection and supports decision-making without granting full autonomy to machines.
Lessons learned already show that AI can significantly enhance staff responsiveness without bypassing the human decision-making chain.
With legal, ethical, and technical safeguards in place to ensure that security does not come at the expense of freedom, AI (whether emotional, explainable, hybrid, or otherwise) represents an unprecedented opportunity to modernize border control and strengthen border security, with technology acting as a catalyst for the ongoing evolution of processes and migration policies.
(By Murielle Delaporte)