Bill 149: a focus on hiring employees and employers’ use of AI
On November 14, 2023, Bill 149, Working for Workers Four Act, 2023, received first reading in the Ontario legislature, and on November 23, 2023, it received second reading and was ordered to the Standing Committee on Social Policy. Consultations have commenced, and interested parties may make submissions to the Standing Committee by February 1, 2024.
This bill deals with several topics; the focus of this article is on Part III.1 (beginning with section 8.1) dealing with the use of artificial intelligence and publicly advertised job postings. Previously, I wrote about changes to the Ontario Employment Standards Act, namely the changes regarding electronic monitoring of employees.
More changes are now being proposed and debated.
What are the proposed changes?
Firstly, the bill sets out definitions of “artificial intelligence” and “publicly advertised job posting”; however, these definitions have no substance, because each simply states that the term “has the meaning set out in the regulations.”
Secondly, section 8.4 states that employers who use artificial intelligence to screen, assess, or select applicants for a publicly advertised job posting must include in the posting a statement disclosing their use of artificial intelligence.
As with the proposed definitions, this proposed provision is somewhat mysterious since we do not know what the statement should look like—there are no examples. I wonder:
- What are employers supposed to say that they are using or doing in their statements (given the obscure definitions)?
- How is “screen, assess, or select applicants for a position” defined?
- Are there explicit prohibitions against employers profiling and discriminating against employees based on certain data sources or vehicles such as employee surveillance?
- Do employers need to publish their statement on their website and explain how their decisions are made when they screen, assess, or select applicants for a position, or is a simple statement on sites such as LinkedIn or Indeed sufficient?
- Do employers need to get an independent auditor to conduct a bias audit and share the results?
- What can employees do if they disagree with an employer’s decision?
- Do employers need to provide, in their statement, a process or contact details for applicants who disagree with a decision made using AI?
- Do employers need to provide a link to the Ontario Human Rights Commission and Tribunal for further information or in case employees want to make a human rights complaint under the Human Rights Code?
Essentially, how much detail should employers be including when they craft their statements?
Subsection 8.4(2) states that the disclosure requirement does not apply where the job posting meets certain criteria. But we do not know what these exceptions are because they would be prescribed by regulation at a later date.
What the foregoing suggests is that the proposed provisions in Bill 149 are not very helpful. Let us continue.
Why is this important?
As I carefully explained in my doctoral dissertation, the concern about employee surveillance, facial recognition, analytics, and other automated decision-making/AI tools in employment is that bias already built into an employer’s workplace and systems can be perpetuated, whether intentionally or unintentionally.
For instance, if only women have been hired by the employer for a certain position in the past, there is a high likelihood that only women will be hired when AI tools are used going forward, since AI tools would simply be learning from the previous examples and selecting similar types of employees for the job.
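To make this concrete, here is a minimal, hypothetical sketch in Python of how a screening tool that learns only from past hiring decisions can simply reproduce the pattern in those decisions. The data, the “model,” and the 0.5 threshold are all invented for illustration; a real vendor’s system would be far more complex, but the underlying risk is the same.

```python
# Hypothetical illustration: a toy "screening model" trained only on an
# employer's past hiring decisions. All data below is invented.

# Past applicants: (years_experience, gender, was_hired)
past_applicants = [
    (5, "F", True), (3, "F", True), (4, "F", True),
    (6, "M", False), (4, "M", False), (5, "M", False),
]

def train(history):
    """Learn the historical hire rate per gender (a stand-in for the
    patterns a real model would infer from correlated features)."""
    counts = {}
    for _, gender, hired in history:
        hires, total = counts.get(gender, (0, 0))
        counts[gender] = (hires + int(hired), total + 1)
    return {g: hires / total for g, (hires, total) in counts.items()}

def screen(model, candidate):
    """Recommend a candidate only if their group's historical hire rate is high."""
    _, gender = candidate
    return model.get(gender, 0.0) > 0.5

model = train(past_applicants)

# Two equally experienced new candidates
print(screen(model, (5, "F")))  # True  -- recommended
print(screen(model, (5, "M")))  # False -- screened out despite equal experience
```

The tool never sees an instruction to discriminate; it simply extrapolates from who was hired before, which is precisely why disclosure alone tells applicants very little about how they are actually being assessed.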
This is why employers need to do more than simply notify employees about the use of complex AI. For the sake of fairness, employers need to explain what they are doing as clearly as possible, in plain English, so that employees understand how they are being evaluated and selected. It would be a shame if employers inadvertently screened out great candidates, and the last thing employers want is to appear underinclusive or to favour certain kinds of applicants during the selection process.
Analysis
As can be seen from the above discussion, these proposed provisions are cryptic at best. It is challenging to know what was intended by the drafters since the definitions and exceptions are missing and an explanation of the required statement is nowhere to be found.
The worry is that employers will look at this, scratch their heads, and ask, “Does this mean that we should make a bare-bones statement in public job ads, such as ‘We use artificial intelligence when hiring’?”
Part III.1 appears to be a skeleton, similar to what is set out in Bill C-27’s Artificial Intelligence and Data Act (AIDA). AIDA, too, is missing key definitions, and in the case of Ontario’s Bill 149 there is no question that additional clarity is required.
I now turn to an examination of what other jurisdictions have done to create a comprehensive and effective law that deals with hiring and AI, hoping that we can learn from a comparative analysis.
What can Ontario learn from New York City?
Let us examine New York City’s hiring law dealing with AI and automated decision tools. As stipulated by the New York City Department of Consumer and Worker Protection, employers now need to change how they use AI tools when recruiting and hiring employees. That is, employers who use AI and other machine learning technology in New York City must:
- Conduct a bias audit before using the tool: the bias audit must be an impartial evaluation done by an independent auditor, and bias can include intentional and unintentional bias
- Post a summary of the results of the bias audit on their website: this must be done in a clear and conspicuous manner
- Notify job candidates and employees that the tool will be used to assess them, and include instructions for requesting accommodations: candidates and employees are both included because the term “employment decision” covers both hiring and promotion. At least 10 days prior to use, employers must disclose to each candidate or employee in New York City that an automated decision tool will be used in connection with their assessment or evaluation. Likewise, employers must articulate the job qualifications and characteristics that the tool will use in that assessment
- Post on the employer’s website a notice about the type and source of data used for the tool and the employer’s data retention policy: employers and employment agencies must provide the information and post instructions on the employment section of their website regarding how to make a written request for this information (and respond to requests within 30 days)
Furthermore, employees can make a complaint if they were not provided with adequate notices or were not able to access the results of the bias audit.
Employers should note that in New York City, failure to comply with the rules can lead to fines of up to $1,500 per instance.
As can be seen, the above rules are considerably more substantive and helpful when it comes to providing the necessary guidance for employers. New York City’s rules also contain detailed definitions (for example, of the bias audit, of screening and selecting candidates, and of the AI technology at issue). Perhaps Ontario can borrow some of the main components of New York City’s rules.