Artificial intelligence and Canada’s immigration system

Thursday 20 April 2023

Sergio R Karas
Karas Immigration Law Professional Corporation, Toronto
karas@karas.ca

Reeva Goel
Karas Immigration Law Professional Corporation, Toronto
goelreeva@gmail.com

Introduction

Artificial intelligence (AI) is increasingly being used in many industries and sectors to streamline processes, make decision-making more efficient and improve overall productivity. Canadian immigration is no exception. This article focuses on how AI has begun to play a significant role in Canada’s immigration system, the ethical considerations for automated decision-making (ADM) and the evolution of AI regulations.

AI’s role in Canadian immigration

The Immigration and Refugee Protection Act[1] (IRPA) was amended in 2017 to include the following provisions on electronic administration:

Decision, determination or examination by an automated system – Section 186.1(5) ‘For greater certainty, an electronic system, including an automated system, may be used by the Minister to make a decision or determination under this Act, or by an officer to make a decision or determination or to proceed with an examination under this Act if the system is made available to the officer by the Minister.’

Requirement to use electronic means – Section 186.3(2) ‘The regulations may require a foreign national or another individual who, or entity that, makes an application, request, or claim, submits any document, or provides information under this Act to do so using electronic means, including an electronic system. The regulations may also include provisions respecting those means, including that system, respecting the circumstances in which that application, request or claim may be made, the document may be submitted, or the information may be provided by other means and respecting those other means.’

The IRPA now requires that applications for many visa categories be completed in electronic form. The amendments also authorise immigration officers to use electronic and automated systems to make decisions.

The Government of Canada’s department for immigration, Immigration, Refugees and Citizenship Canada (IRCC), is increasing the automation of its services due to the growth in temporary resident applications, including study permits, work permits and temporary resident visas (TRVs). The intention of using tools like Chinook and Advanced Data Analytics (ADA) as automated decision systems in Canada is to improve administrative decision-making processes, assist or replace personnel, increase efficiency and reduce the processing time for applications.

Chinook is a Microsoft Excel-based tool developed in 2018 by the IRCC to assist with applications. It displays information stored in the IRCC’s processing system and system of record, increasing the user productivity of the global case management system.[2] Chinook is designed to simplify a client’s information and reduces the amount of time spent uploading and reviewing information.[3] The IRCC claims that Chinook does not assess or make decisions on applications.

ADA is used to assist IRCC officers in sorting and processing all TRV applications submitted from outside Canada. The purpose of using ADA is to find new ways for the IRCC to improve its client service and processes, and to assist in managing the increasing volume of TRVs. ADA is used to identify routine applications to streamline processing (for clients who have been approved to visit Canada in the past ten years), to create efficiencies by sorting and triaging non-routine applications and more.[4] The IRCC states that only an officer can refuse an application and that the system never refuses or recommends refusing applications. ADA has been used since 2018 to help sort more than one million applications, resulting in applications being assessed 87 per cent faster.[5]
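As a purely hypothetical illustration of the kind of triage described above (the IRCC has not published ADA’s actual rules, and every name, field and threshold here is an assumption), a rule-based sorter might stream applications like this:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical sketch only: this is not IRCC's actual logic.
@dataclass
class TrvApplication:
    applicant_id: str
    last_approval: Optional[date]  # date of most recent prior approval, if any

def triage(app: TrvApplication, today: date) -> str:
    """Sort an application into a processing stream.

    Mirrors the public description of ADA: applicants approved to visit
    Canada within the past ten years are streamed as 'routine'; everything
    else is queued for full review. A triage label is never a refusal --
    under the IRCC's stated policy only an officer can refuse.
    """
    if app.last_approval is not None and (today - app.last_approval).days <= 3650:
        return "routine"
    return "non-routine"

apps = [
    TrvApplication("A1", date(2020, 6, 1)),   # approved three years ago
    TrvApplication("A2", None),               # no prior approval
    TrvApplication("A3", date(2005, 1, 15)),  # approval outside ten years
]
print([triage(a, date(2023, 4, 20)) for a in apps])
# ['routine', 'non-routine', 'non-routine']
```

The point of the sketch is that sorting of this kind only routes files between queues; the final decision remains with an officer.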

Ethical considerations for ADM

AI can improve the integrity of the Canadian immigration system. AI algorithms can be used to detect fraudulent applications and identify potential security threats. The technology can analyse an applicant’s social media presence to detect any red flags or warning signs, such as evidence of terrorism or criminal activity. This helps to ensure that only those who are truly eligible are granted entry to Canada.

AI algorithms can analyse a range of factors, including an applicant’s work history, education and other relevant information, to help determine their eligibility for a particular visa category. The IRCC claims that this allows immigration officials to make more informed and objective decisions, reducing the risk of discrimination or bias. Using AI algorithms and machine learning models can help to quickly identify potential issues or inconsistencies in applications, allowing immigration officers to focus their attention on the most complex or high-risk cases. The IRCC asserts that AI will speed up the processing time for applications and reduce the backlog of cases waiting to be reviewed, especially those that are considered routine.
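To illustrate the kind of consistency checking described above, the sketch below flags simple contradictions in a hypothetical application record so that an officer can focus review on them. The field names and checks are invented assumptions, not the IRCC’s actual criteria:

```python
# Hypothetical sketch of machine-assisted consistency checks; the IRCC
# has not published what its systems actually verify.
def flag_inconsistencies(app: dict) -> list:
    """Return human-readable flags for an officer to review."""
    flags = []
    if app["employment_end"] and app["employment_end"] < app["employment_start"]:
        flags.append("employment end date precedes start date")
    if app["stated_funds"] < app["required_funds"]:
        flags.append("declared funds below program requirement")
    return flags

app = {
    "employment_start": 2019,
    "employment_end": 2017,       # inconsistent dates
    "stated_funds": 8_000,
    "required_funds": 10_000,
}
print(flag_inconsistencies(app))
# ['employment end date precedes start date', 'declared funds below program requirement']
```

Checks like these surface anomalies for human attention; they do not, by themselves, decide eligibility.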

However, there are numerous concerns about the use of AI in Canadian immigration. Some critics argue that AI algorithms and machine learning models may perpetuate existing biases or stereotypes. If an AI model is trained on historical data that includes discrimination or bias, it may continue to make decisions that reflect these biases, even if they are unintentional. The IRCC claims that any application the AI finds problematic will always go to an officer for determination. However, this can itself introduce bias, as the officer may assume that the AI had a valid reason to find the application problematic. The scales will always be tipped in favour of the AI’s ‘determination’.
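The risk of a model learning historical bias can be made concrete with a toy sketch. The data and the majority-vote ‘model’ below are invented for illustration and have no connection to any IRCC system: if past decision-makers refused one group’s otherwise-identical files more often, the simplest model fit to those labels reproduces exactly that disparity.

```python
from collections import defaultdict

# Invented toy data: historical decisions in which group 'X' applicants
# were refused far more often than group 'Y' for comparable files.
history = [
    ("X", "refused"), ("X", "refused"), ("X", "refused"), ("X", "approved"),
    ("Y", "approved"), ("Y", "approved"), ("Y", "approved"), ("Y", "refused"),
]

def fit_majority_by_group(records):
    """'Train' the simplest possible model: predict each group's
    majority historical outcome. Because the group attribute is the
    only feature, past bias becomes the model's entire decision rule."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, outcome in records:
        counts[group][outcome] += 1
    return {g: max(c, key=c.get) for g, c in counts.items()}

model = fit_majority_by_group(history)
print(model)  # {'X': 'refused', 'Y': 'approved'}
```

Real systems use far richer features, but the mechanism is the same: a model optimised to reproduce historical labels will reproduce the patterns, fair or unfair, embedded in them.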

There are also concerns about the transparency and accountability of AI. Currently, it can be difficult for individuals to understand how AI systems are making decisions, and to challenge these decisions if they believe they are incorrect. For instance, in relation to Chinook, there are persistent concerns that not enough information is publicly available to understand how the technology operates. This raises questions about the reliability and accuracy of AI, and whether individuals have the right to challenge or contest decisions made by AI systems.

There is also the potential for privacy violations. If AI algorithms are used to analyse an applicant’s social media presence, they may gather sensitive information that could be used against the applicant. Additionally, there are concerns about the security of the data collected by AI systems, and the potential for this data to be used for malicious purposes. As AI systems become more sophisticated, they could be used to launch cyberattacks or steal personal information, such as passwords or financial data. Strong security measures need to be in place to protect personal data, which should be audited by third parties.

Human rights violations may occur when institutions rely on AI for administrative decision-making. Institutions relying on AI must ensure that an individual’s rights guaranteed under the Canadian Charter of Rights and Freedoms[6] (the ‘Charter’) are not infringed. Section 7 of the Charter provides that everyone has the right to life, liberty and security of the person, and it has been interpreted as a source of constitutional protection for privacy interests, including where information about an individual is retained longer than necessary. It is not guaranteed that the information kept by Chinook and ADA will remain secure.

Finally, there is a concern about the use of AI for mass surveillance. AI systems can be used to gather large amounts of personal data and to analyse it to identify individuals and track their movements. In an extreme example, the Chinese government has implemented a comprehensive surveillance system that leverages AI technology to monitor the behaviour and activities of its citizens. The AI surveillance system in China comprises various technologies, including facial recognition, big data analytics and cloud computing. It collects data from a wide range of sources and uses this data to build profiles on individuals. The AI algorithms then analyse this data to identify patterns of behaviour and predict threats to the government. This must never be allowed to happen in Canada.

AI regulations

The Personal Information Protection and Electronic Documents Act (PIPEDA)[7] was enacted in 2001 and is the foundation of private sector privacy protection at the federal level in Canada. PIPEDA predates the emergence of AI and requires significant legislative changes to address AI developments. In 2020, the Office of the Privacy Commissioner (OPC) launched a public consultation on reforming PIPEDA to ensure the appropriate regulation of AI. Currently, if there is a breach, the OPC only has the authority to resolve complaints through negotiation or other methods, and may take the matter to federal court to seek an order to rectify the situation. Recommendations have been made that the OPC be given the authority to issue binding orders and financial penalties to guarantee compliance and protect human rights.[8]

The Privacy Act[9] (the ‘Act’) was enacted in 1983. It applies to the government’s collection, use, disclosure, retention or disposal of personal information. Section 2 of the Act states that its purpose is to protect the privacy of individuals with respect to personal information held by a government institution and to provide individuals with a right of access to that information. The IRCC is subject to the Act and must ensure that it complies with its requirements.

In 2021, the Canadian government launched an online public consultation on the future of the Act and the potential for modernising it to ‘enhance Canadians’ trust in how federal public bodies treat, manage and protect their personal information.’[10] The government explained that its vision for modernising the Act is supported by three pillars: respect, adaptability and accountability.[11]

Based on these recommendations, the federal government introduced Bill C-27, the Digital Charter Implementation Act, in 2022, setting out a new privacy legislative framework. It would enact the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act (AIDA). The CPPA would repeal parts of PIPEDA and ‘replace them with a new legislative regime governing the collection, use, and disclosure of personal information for commercial activity in Canada.’[12] The purpose is to modernise, maintain and extend the existing privacy rules and to impose new ones on the private sector.

Similarly, AIDA sets out new measures to regulate international and interprovincial trade and commerce involving AI systems.[13] The purpose of AIDA is ‘to establish common requirements for the design, development, and use of [AI] systems, including measures to mitigate risks of harm and biased output. It would prohibit specific practices with data and [AI] systems that may result in serious harm to individuals or their interests.’[14]

Conclusion

To address the privacy concerns, it is important to implement clear and effective privacy regulations to ensure that personal data is collected, stored and used in a responsible and ethical manner, and that individuals have control over it. It is also important to ensure that AI systems are transparent and accountable, and that individuals have the right to challenge or contest decisions made by AI.

The authorities must promote transparency and accountability in the development and use of AI. This includes ensuring that AI models are transparent, that applicants understand how decisions are being made and that individuals are protected from malicious or unethical use of AI.

Despite these concerns, the use of AI in Canadian immigration is likely to continue to grow, as it offers many benefits that are too compelling to ignore. The speed and efficiency that AI provides can help to reduce the backlog of cases waiting to be reviewed. The improved decision-making capabilities offered by AI can help to protect the safety and security of Canadian citizens if AI is used in an ethical and responsible manner, with the appropriate oversight by independent watchdogs. Failure to do so may usher in the erosion of civil and constitutionally protected rights.


[1] Immigration and Refugee Protection Act, SC 2001, c 27.

[2] Government of Canada, CIMM – Chinook Development and Implementation in Decision-Making, 15 and 17 February 2022, www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-feb-15-17-2022/chinook-development-implementation-decision-making.html

[3] Ibid.

[4] Government of Canada, Advanced data analytics to help IRCC officers sort and process temporary resident visa applications, www.canada.ca/en/immigration-refugees-citizenship/news/notices/analytics-help-process-trv-applications.html

[5] Ibid.

[6] Canadian Charter of Rights and Freedoms, Part I of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (UK), 1982, c 11.

[7] Personal Information Protection and Electronic Documents Act, SC 2000, c 5.

[8] Office of the Privacy Commissioner of Canada, Policy Proposals for PIPEDA Reform to Address Artificial Intelligence Report, www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/pol-ai_202011/

[9] Privacy Act, RSC, 1985, c P-21.

[10] Government of Canada, ‘Respect, Accountability, Adaptability: A discussion paper on the modernization of the Privacy Act’, www.justice.gc.ca/eng/csj-sjc/pa-lprp/dp-dd/raa-rar.html

[11] Ibid.

[12] Government of Canada, Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, www.justice.gc.ca/eng/csj-sjc/pl/charter-charte/c27_1.html

[13] Ibid.

[14] Ibid.