AI’s Threat to Fair Hiring in Canada
This article was written by Peter Liddle, 1L.
One of the first rules of hiring is to avoid “trusting your gut.” Studies show that managers who hire on intuition tend to pick candidates who mirror their own traits, an unreliable way to find the best person for the job.
The introduction of AI to the hiring process appears, on the surface, to be an antidote to this form of hiring bias. Unlike human hiring managers, AI technology bases its decisions exclusively on hard data, through the process of machine learning. Although robots are not yet making the final call on hiring decisions, many companies already use them to screen weaker candidates out of the hiring pool, and some even employ them to predict performance based on a linguistic analysis of a writing sample or speech pattern. But does this type of technology pose an ethical problem of its own?
Ifeoma Ajunwa, an assistant professor of labour law at Cornell University, made a compelling claim earlier this month in the New York Times about the potential threat AI poses to fair hiring (she is also sharing her research at McGill on November 4th). Ajunwa argues that AI creates a “closed-loop system”: it first encourages an arbitrary pool of applicants to apply, then uses that data set to make future decisions about who the “best” candidates are.
AI tools reflect the bias of their creators, the data sets used by the algorithm, and the inherent biases in our society, according to Ignacio Cofone, an assistant professor at the Faculty of Law at McGill. Teaching an algorithm to hire software engineers by feeding it the resumes of a company’s current top performers will simply amplify the bias that the company exhibited in its past hiring decisions. This is what led Amazon to shut down one of its hiring algorithms after it was found to penalize resumes that included female-specific attributes.
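The mechanism is easy to see in miniature. The sketch below (Python, with invented resumes and a deliberately simplistic word-counting “model”, not Amazon’s actual system) shows how an algorithm trained only on a company’s past hires rewards whatever those hires happened to have in common and gives no credit to anything else:

```python
from collections import Counter

# Hypothetical data: keyword lists from the resumes of past "top
# performers", at a company whose historical hiring skewed male.
past_hires = [
    "java rugby captain systems",
    "python systems rugby lead",
    "java rugby systems architect",
]

# "Learn" which words signal a good candidate by counting how often
# each word appears across the past hires.
signal = Counter(word for resume in past_hires for word in resume.split())

def score(resume: str) -> int:
    """Score a resume by how often its words appeared in past hires."""
    return sum(signal[w] for w in resume.split())

# Two comparable applicants; the second mentions a women's chess club.
applicant_a = "java systems rugby"
applicant_b = "java systems womens chess club captain"

print(score(applicant_a))  # 8: rewarded for matching the old pattern
print(score(applicant_b))  # 6: "womens chess club" contributes nothing
```

Because the historical hires skew toward one demographic’s interests, the second applicant’s extra credentials earn no credit: the “model” has simply memorialized the company’s past hiring pattern and fed it forward.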
It is inevitable that as more complex AI is introduced to the hiring process, we will need more sophisticated regulation to prevent discrimination. This will involve recognizing “proxy” factors that stand in for traditional protected classes. For example, a postal code might be an indirect way of discriminating against a racial minority or working-class applicant. As AI becomes more complex, the number of proxies will become increasingly difficult to monitor and regulate. The solution will likely depend on how successfully we regulate the data the algorithm is allowed to access.
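To make the proxy problem concrete, here is a toy audit sketch (hypothetical postal codes and numbers, not real demographic data): even after a protected attribute is removed from a data set, an auditor can measure how well a remaining feature, such as a postal code, still predicts that attribute.

```python
from collections import Counter, defaultdict

# Hypothetical applicants: (postal code prefix, protected group).
# The protected group is never an input to the hiring model, but the
# postal code is correlated with it.
applicants = [
    ("H1X", "minority"), ("H1X", "minority"), ("H1X", "minority"),
    ("H3Z", "majority"), ("H3Z", "majority"), ("H3Z", "minority"),
]

def proxy_accuracy(rows):
    """How often the dominant group within each postal prefix predicts an
    applicant's group, i.e. how much the proxy 'leaks' the attribute."""
    by_code = defaultdict(Counter)
    for code, group in rows:
        by_code[code][group] += 1
    correct = sum(counts.most_common(1)[0][1] for counts in by_code.values())
    return correct / len(rows)

# The postal code recovers the protected group 5 times out of 6, above
# the 4-in-6 base rate of always guessing the larger group.
print(round(proxy_accuracy(applicants), 2))
```

A regulator inspecting the model’s inputs would find nothing labelled “race”; measuring this kind of leakage is what makes proxy discrimination detectable, and the task grows harder as the number of candidate proxies multiplies.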
Regulation of discrimination by AI tools in hiring is still at an early stage across the world, although countries, including Canada, are taking steps to build an ethical framework that will form the foundation of future laws. In 2017, Canada invested $125 million in research and development of AI, part of which funds the CIFAR AI and Society Program, which studies the policy implications of AI across a variety of fields. CIFAR policy recommendations for addressing AI hiring bias include developing guidelines and training for provincial labour boards, requiring greater transparency from companies about their AI-driven hiring methods, and building the skill sets that regulators and the public will need to ensure that AI leads to fairer hiring practices.
The job market is unfortunately still rife with discrimination, although it is often subtle and at times not even intentional. Now is the time to determine whether AI will be a friend or a foe in the effort to fight for fairer employment.
1 Gleb Tsipursky, “Here’s why your gut instinct is wrong at work”, The Conversation (7 March 2017), online: <www.theconversation.com>.
2 Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband & Linda Schettler, “The Legal and Ethical Implications of Using AI in Hiring”, Harvard Business Review (25 April 2019), online: <www.hbr.org>.
3 Ifeoma Ajunwa, “Beware of Automated Hiring”, The New York Times (8 October 2019), online: <www.nytimes.com>.
4 Ignacio N Cofone, “Algorithmic Discrimination Is an Information Problem” (2019) 70 Hastings LJ at 1394.
5 See Cofone, supra note 4 at 1408.
6 Ibid at 1409.
7 Tim Dutton, “An Overview of National AI Strategies”, Medium (28 June 2018), online: <www.medium.com>.
8 Sarah Villeneuve, Brent Barron & Gaga Boskovic, “Rebooting Regulation: Exploring the Future of AI Policy in Canada”, CIFAR (May 2019), online: <www.cifar.ca>.