Across the globe, states are embracing artificial intelligence and surveillance technologies to manage borders, process migration applications, and monitor mobile populations. These tools are often marketed as solutions: efficient, objective, and immune to human bias. In Canada, at the UNHCR, and in other institutional contexts, AI promises smarter migration governance: faster decisions, better data, and fewer errors.
But behind the promise of innovation lies a more troubling reality. The digitization of migration governance is not neutral. AI and algorithmic technologies are being deployed within systems already shaped by structural exclusion. The result is a new frontier of control where black-box systems replace legal reasoning, and where migrants are datafied, risk-assessed, and filtered before they are ever seen as rights-holders.
Built on datasets that reflect past discrimination, coded by actors who lack racial literacy, and deployed in legal systems with limited transparency or appeal mechanisms, AI in migration replicates the very biases it claims to solve. In the absence of strong safeguards, it has become a tool of algorithmic border control that reinforces racialized exclusions under the guise of efficiency.
Case Studies from Canada and the UNHCR
AI-driven systems in migration governance are not abstract theories — they are already operating within Canadian borders and international humanitarian infrastructures. From visa processing algorithms to biometric tracking tools, these systems reshape how migrants are seen, assessed, and sorted. The case studies of Canada’s Chinook system and the UNHCR’s Project Jetson illustrate how emerging technologies are being deployed not just to manage migration, but to discipline and pre-empt it — often in ways that erode transparency, accountability, and due process.
Canada’s Chinook system, used by Immigration, Refugees and Citizenship Canada, is a Microsoft Excel-based tool that streamlines the processing of temporary resident visa applications, often without applicants knowing that an algorithm has touched their file. The system organizes visa applications into spreadsheets and facilitates mass refusals based on standardized filters, raising serious concerns about procedural fairness.
Though the government insists Chinook is not automated decision-making, the reality is more nuanced. Officers use automated sorting and summarization features, eliminating individual consideration for many applications. In practice, it enables bulk refusals without detailed reasoning and disproportionately affects applicants from countries with higher refusal rates, primarily in the Global South.
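The structural concern is easiest to see in a deliberately simplified sketch of spreadsheet-style batch triage. Chinook’s actual criteria and code are not public; the column names, country list, funds threshold, and templated refusal note below are invented for illustration only. The point is that once applications become rows and officer judgment becomes a filter, a single rule can dispose of many files at once without engaging the evidence in any of them.

```python
# Hypothetical sketch only: Chinook's real filters and criteria are not public.
# The columns, threshold, and "high-refusal country" list below are invented
# to illustrate how batch filtering can displace case-by-case assessment.
import pandas as pd

applications = pd.DataFrame([
    {"file": "A-001", "country": "X", "funds_cad": 4000},
    {"file": "A-002", "country": "Y", "funds_cad": 12000},
    {"file": "A-003", "country": "X", "funds_cad": 3500},
])

HIGH_REFUSAL_COUNTRIES = {"X"}  # assumed stand-in for historical refusal statistics

# One standardized filter applied to every row at once.
flagged = applications[
    applications["country"].isin(HIGH_REFUSAL_COUNTRIES)
    & (applications["funds_cad"] < 5000)
]

# A single templated note is attached to every flagged file,
# with no engagement with the individual evidence in each application.
refusals = flagged.assign(reason="Not satisfied applicant will leave Canada (template)")
print(refusals[["file", "reason"]])
```

The templated reason column is the structural analogue of the generic refusal letters that have begun to reach the courts.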
The Federal Court has increasingly scrutinized decisions made using Chinook, noting the lack of personalized analysis. In Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464, the Court criticized the generic refusal letter generated through Chinook and emphasized that decisions must reflect individualized assessments and intelligible justification. Yet Chinook continues to be deployed under the banner of efficiency, in contexts where the stakes are anything but routine.
While Canada adopts AI for visa processing, international agencies are applying similar tools to refugee management and surveillance. The UNHCR’s Project Jetson uses predictive analytics to monitor population movements in the East and Horn of Africa. It draws on sources such as social media, mobile phone activity, and satellite imagery to forecast where displacement may occur, ostensibly to improve humanitarian response.
But Jetson’s predictive model, like all AI, is built on assumptions about which movements matter, who is considered a risk, and how migration should be managed. These tools are rarely designed with input from affected communities, and they often map mobility through a security lens, reducing refugees to data points in a system of geopolitical risk management.
The ethical and legal issues are substantial. Who consents to having their movements tracked by satellite or inferred from metadata? What rights do refugees have if they are classified as high-risk by a predictive model? How can affected populations contest the decisions made by or informed by such technologies? These questions remain largely unanswered, in part because no legal regime currently governs the use of AI in humanitarian or migration contexts.
Both Chinook and Jetson reflect a broader trend: experimental technology is being trialed on migrants, often in legal grey zones. These populations are seen as carrying little risk of political blowback, particularly when they are non-citizens with few procedural protections. The result is a sandbox effect, where tech developers and policymakers treat migration management as a proving ground for AI systems.
This is not an accident. It is a structural feature of how digital governance is emerging in the migration space. Under the guise of humanitarian efficiency or administrative streamlining, states and international organizations are embedding algorithmic decision-making deep into systems that lack oversight, transparency, or avenues for redress.
Algorithmic Discrimination and the Mirage of Objectivity
One of the most seductive myths about artificial intelligence is that it is objective. Because algorithms rely on data, not emotions or intuition, they are often assumed to produce fairer, more consistent outcomes than humans. But in migration governance, this logic collapses under scrutiny. AI systems used to manage migrants, refugees, and asylum seekers are not neutral instruments. They are shaped by data that reflects past discrimination, coded by institutions that lack racial literacy, and deployed in legal contexts that provide minimal procedural safeguards.
AI systems are trained on historical data, and that data often mirrors the racial, gendered, and geopolitical hierarchies of the past. In migration, this includes decades of decision-making practices that disproportionately refused applicants from Africa, the Middle East, and parts of Asia. When algorithms are trained on this data, they learn to replicate the same patterns, just faster and with less transparency.
This is not hypothetical. In Khosravi v. Canada (Citizenship and Immigration), 2022 FC 1437, the Court raised concerns about the use of boilerplate reasoning and batch processing tools that failed to engage with evidence in a meaningful way. These decisions, rendered through systems like Chinook, often reflect predetermined risk profiles based on nationality, age, or travel history, all of which serve as proxies for race and class.
A central legal problem with AI in migration is explainability. Many algorithmic tools operate as black boxes, where the rationale behind their decisions is obscured or untraceable, even to the officers who rely on them. This creates serious barriers for judicial review, appeal, or even basic accountability. If an applicant is refused because they triggered a risk flag, how can they challenge that flag if they don’t know what it was or how it was generated?
In administrative law, decision-makers are required to provide sufficient reasons to allow for meaningful review. This standard, reaffirmed in Vavilov, is often violated when automated decision tools are used. The consequence is not just bad law; it is a denial of procedural fairness and a threat to the rule of law itself.
When algorithmic systems guide migration decisions, they do more than reflect bias; they can entrench and amplify it. This creates what scholars call a feedback loop of exclusion. Past refusals shape the data. That data trains the model. The model predicts high risk. The system refuses more applications from the same demographic. The cycle continues.
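A toy simulation makes the loop concrete. Everything below is invented: the two groups, the starting refusal rates, and the “model”, which is nothing more than a per-group refusal probability that is re-estimated from each round’s decisions and nudged upward for whichever group was refused more. Even though the underlying applicants never change, the flagged group’s refusal rate climbs round after round.

```python
# Toy simulation of a refusal feedback loop. The groups, rates, and "model"
# are invented for illustration; no real migration data is involved.
import random

random.seed(0)

groups = ["A", "B"]

# The "model" is a per-group refusal probability learned from past decisions,
# starting with a small historical skew against group B (legacy bias in the data).
refusal_rate = {"A": 0.20, "B": 0.30}

for round_number in range(1, 6):
    decisions = {}
    for g in groups:
        # 1,000 applications per group; the applicants themselves never change.
        decisions[g] = [random.random() < refusal_rate[g] for _ in range(1000)]

    # "Retrain": the next round's rate is the observed refusal rate, nudged
    # upward for the group refused more often (a stand-in for a risk model
    # that treats past refusals as evidence of future risk).
    observed = {g: sum(decisions[g]) / len(decisions[g]) for g in groups}
    riskier = max(observed, key=observed.get)
    refusal_rate = {
        g: min(0.95, observed[g] + (0.05 if g == riskier else 0.0))
        for g in groups
    }

    print(f"round {round_number}: observed refusal rates {observed}")
```

The gap widens not because group B’s applications change, but because each refusal becomes data that justifies the next one.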
This loop is particularly dangerous in immigration law, where applicants rarely receive full reasons and where judicial review is limited by cost, geography, and the restrictive leave requirement. It means entire populations can be digitally profiled as inadmissible or undesirable, with little opportunity to contest the label.
Underlying these dynamics is a core truth: in migration governance, efficiency often trumps justice. AI is deployed not because it is fairer, but because it is faster. In this framework, the goal is not individualized justice; it is throughput.
A Critical Framework
To understand how artificial intelligence reshapes migration governance, it is not enough to critique biased data or black-box systems. We must also examine the underlying power structures that determine how these technologies are built, deployed, and justified. Migration AI operates not in a vacuum but within a long-standing architecture of exclusion.
By bringing together insights from Critical Race Theory, Third World Approaches to International Law, and Critical Technology Studies, we can better understand that algorithmic decision-making in migration is not a technological glitch. It is a predictable outcome of systems designed to enforce racialized border control.
Critical Race Theory teaches us that law is not neutral; it plays a central role in producing and maintaining racial hierarchies. In the context of migration law, borders have long functioned as racial filters, determining who is permitted to enter, who is excluded, and under what conditions. This pattern is evident across history, from the Chinese Head Tax and the preferred nations doctrine to contemporary visa regimes that disproportionately refuse applicants from the Global South.
Third World Approaches to International Law scholars offer a vital perspective, reminding us that migration control is also a form of postcolonial governance. The borders of the Global North are maintained not only through visa regimes but also through technological outsourcing: predictive surveillance, biometric databases, and risk-scoring systems exported to or tested in the Global South.
Project Jetson exemplifies this dynamic: the communities under surveillance had no meaningful input into how the system was developed, how their data would be used, or what the consequences would be. This is tech colonialism in practice, using the bodies and movements of racialized populations to refine tools of control without democratic legitimacy or accountability.
Similarly, when Canada deploys AI tools that disproportionately refuse applications from African or South Asian nations, it reproduces a global stratification of mobility while treating the Global South as inherently risky or undesirable.
Many tech reform efforts focus on building better algorithms: more diverse training data, transparency dashboards, fairness audits. While important, these reforms are insufficient if they fail to confront the underlying racial and geopolitical logics that shape how migration law operates.
What’s needed is a race-conscious approach to migration technology. This approach should recognize how AI systems participate in racial ordering, interrogate the assumptions and interests embedded in technical design, demand meaningful consent and participation from affected communities, and ground legal reform in principles of anti-racism, decoloniality, and procedural justice.
Without this shift, migration AI will remain what it increasingly appears to be: a digital border wall, dressed in the language of efficiency and fairness.
Towards Just Tech Governance in Migration
If AI is here to stay in migration governance, and all evidence suggests it is, then the challenge is clear: how do we govern these systems justly? How do we ensure that technology serves as a tool for dignity and rights, not another instrument of control and exclusion? The answer begins with rethinking the legal, ethical, and institutional frameworks that currently surround AI in migration, and demanding race-conscious, rights-based accountability.
Canada, like many jurisdictions, has adopted high-level principles to govern automated decision-making. The Directive on Automated Decision-Making, introduced in 2019, sets out requirements such as algorithmic impact assessments, transparency obligations, the right to explanation, and human-in-the-loop review for higher-risk decisions.
But these safeguards fall short in migration contexts. Systems like Chinook are often classified as low risk, despite impacting the lives and mobility of thousands. Many decisions processed through algorithmic filters are not disclosed to applicants. Applicants have no right to know if automation was involved, let alone challenge the design or data behind it.
Moreover, the Directive only applies to federal departments, not international partners like the UNHCR or private contractors who increasingly build and maintain AI systems. This fragmented governance landscape creates accountability gaps at every level.
Power, Borders, and the Future of Migration Control
Artificial intelligence is not just reshaping how borders are managed. It is reshaping what borders are. As states and international agencies adopt algorithmic tools to decide who may move, who must stay, and who is deemed a risk, migration control becomes increasingly invisible, automated, and insulated from accountability. Technology promises speed, consistency, and objectivity, but it often delivers opacity, exclusion, and impunity.
From Canada’s use of systems like Chinook, to the UNHCR’s Project Jetson, to biometric surveillance infrastructures spreading across the Global South, migration AI embeds old power hierarchies into new digital forms.
What emerges is not just a legal challenge. It is a moral and political one. Will we allow data-driven governance to further dehumanize migrants? Or will we demand technologies that serve justice, human dignity, and democratic accountability?
The path forward requires more than transparency tweaks or algorithmic audits. It requires a race-conscious, rights-based reckoning with the purpose and politics of migration control itself. It demands that we interrogate who builds these systems, who benefits from their efficiencies, and who bears their costs.
Ultimately, the struggle over AI in migration is not just about tools. It is about power, belonging, and the right to move. If we are to build a migration system worthy of those values, we must resist the automation of inequality and insist on a different kind of intelligence—one rooted in law, justice, and care.
Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.