Immigration Decision-Making: Artificial Intelligence May Violate Human Rights

Canada’s increasing use of artificial intelligence tools to automate immigration decision-making may threaten basic human rights.

Imagine applying to the Canadian government for a visa, for permission to immigrate, or for refugee status, and being turned down. Then you learn that you were rejected by a machine – and that no human being ever read your application.

Currently, Canada uses automated decision-making systems only to triage applications into simple and complex cases. Human officials review the complex cases and make those decisions themselves.

Canada will soon expand the use of these systems to make decisions in some complex cases. This may lead to decisions about immigration or visa applications that discriminate against migrants, residents without citizenship status and other vulnerable people. Immigration lawyers and others are very concerned about this.

Expanding The Scope Of AI’s Immigration Decision-Making

The government is now looking for a contractor to help immigration officials use algorithms and data-mining to

· Assess the risks of sending a failed refugee claimant back home, and

· Calculate whether a migrant is well-established enough in Canada to be allowed to stay on humanitarian grounds

The new automated decision-makers will make these decisions after determining whether

· an application is complete

· a marriage is genuine, and

· an applicant should be identified as a ‘risk’.

Human officials will be much less involved in making these decisions.

Why Artificial Intelligence Does Not Make Neutral Immigration Decisions

A new University of Toronto report, Bots at the Gate, raises concerns about the effect of this new policy on human rights. Its authors warn against using algorithms to make these decisions.

Petra Molnar, a co-author, says that “algorithms are by no means neutral. It’s a set of instructions based on previous data analyses that you use to teach the machine to make a decision. (The machine) doesn’t think or understand the decision it makes.”

Molnar explains that “[b]iases of the individuals designing an automated system or selecting the data that trains it can result in discriminatory outcomes that are difficult to challenge because they are opaque.”

Artificial Intelligence’s Immigration Decisions Could Harm Applicants’ Human Rights

Bots at the Gate presents an example of an algorithm used by some American courts to assess the risk that a person will reoffend when a judge is deciding whether to order pretrial detention. Using the algorithm made it more likely that racialized and vulnerable people, rather than white defendants, would be held behind bars.

Other AI decision-makers rely on stereotypical factors – such as appearance, religion or travel patterns – and often ignore more relevant data when making decisions. This embeds bias into the automated decision-maker.
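To see how this happens, consider a minimal sketch in Python. Everything here is invented for illustration – the feature names, the data, and the simulated past decisions come from no real government system – but it shows how a model trained on biased historical decisions reproduces that bias, scoring two applicants differently even when they differ only in group membership.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features, invented for illustration:
# `group` marks membership in a group that past officers disproportionately
# refused; `establishment` stands in for the genuinely relevant signal.
group = rng.integers(0, 2, size=n)
establishment = rng.normal(0.0, 1.0, size=n)

# Simulated historical decisions: past approvals depended partly on group
# membership, not just on the relevant factor. This is the bias baked into
# the training data.
past_approved = (establishment + 1.5 * (1 - group) + rng.normal(0.0, 1.0, size=n)) > 0

# Train a simple model on those historical decisions.
features = np.column_stack([group, establishment])
model = LogisticRegression().fit(features, past_approved)

# Two applicants with identical establishment scores but different group
# membership now receive different predicted approval probabilities.
same_profile = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(same_profile)[:, 1])
```

Nothing in this toy model was told to discriminate; the disparity comes entirely from the historical decisions it learned from, which is exactly the opacity problem Molnar describes.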

Migrants, residents without citizenship status and other vulnerable people may have their fates decided without a human official ever seeing their case. They probably won’t be able to appeal the AI’s decision as they have little access to legal assistance to protect their rights to privacy, due process, and freedom from discrimination.

Technosolutionism – No Solution To The Problems With AI Decision-Making

Cynthia Khoo of the University’s Citizen Lab also contributed to the report. Khoo is concerned about governments rushing to adopt artificial intelligence decision-making tools, a trend she calls ‘technosolutionism’ because it assumes that technology is free of human frailties and mistakes.

“The problem is that technology – which is made and designed by humans and trained on human decisions and human-produced data – comes with these exact same problems. (But) with the additional problem that not everyone realizes these problems remain.”

Ensuring Artificial Intelligence Decisions Don’t Unintentionally Discriminate

Bots at the Gate proposes that the Canadian government

· Create an arm’s-length oversight body to review all current and proposed government uses of automated decision-making systems, and

· Establish a task force made up of government, civil society, and academics to analyze the impact of AI decision-making on human rights and society

The consequences of automated decision-making are serious. If you find yourself in this situation, you should contact an experienced immigration lawyer for assistance. Their knowledge of the law and of the immigration system will be essential if you hope to successfully challenge an automated decision in your case.