Christina Catenacci, BA, LLB, LLM, PhD
On November 14, 2023, Bill 149, Working for Workers Four Act, 2023, received first reading in the Ontario legislature, and on November 23, 2023, it received second reading and was ordered to the Standing Committee on Social Policy. Consultations have commenced, and interested parties may make their submissions by February 1, 2024, by visiting here.
This bill deals with several topics; the focus of this article is on Part III.1 (beginning with section 8.1) dealing with the use of artificial intelligence and publicly advertised job postings. Previously, I wrote about changes to the Ontario Employment Standards Act, namely the changes regarding electronic monitoring of employees.
More changes are now being proposed and debated.
First, the bill sets out definitions of “artificial intelligence” and “publicly advertised job posting”; however, there is no substance to these definitions because each simply states that the term “has the meaning set out in the regulations.”
Secondly, section 8.4 states that employers who advertise a publicly advertised job posting and who use artificial intelligence to screen, assess, or select applicants for a position must include in the posting a statement disclosing their use of artificial intelligence.
As with the proposed definitions, this proposed provision is somewhat mysterious since we do not know what the statement should look like; there are no examples. Essentially, how much detail should employers include when crafting their statements?
Subsection 8.4(2) states that the disclosure statement is not required where the job posting meets certain criteria. But we do not know what these exceptions are because they would be prescribed in regulations at a later time.
The foregoing suggests that the proposed provisions in Bill 149 are not very helpful. Let us continue.
As I carefully explained in my doctoral dissertation, the concern about employee surveillance, facial recognition, analytics, and other uses of automated decision-making/AI tools in employment is that bias already built into an employer’s workplace and systems can be perpetuated, either intentionally or unintentionally.
For instance, if only women have been hired by the employer for a certain position in the past, there is a high likelihood that only women will be hired when AI tools are used going forward, since the AI tools would simply learn from the previous examples and select similar types of candidates for the job.
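To make the mechanism concrete, here is a minimal, hypothetical sketch of that scenario. The data, feature names, and the use of the scikit-learn library are my own illustration and are not drawn from the bill or from any real screening tool; the point is simply that a model trained on skewed historical hiring decisions reproduces them.

```python
# Toy illustration (invented data): a screening model trained on biased
# historical hiring decisions carries that bias forward to new candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical past applicants: gender (1 = woman, 0 = man) and a skill score.
gender = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Historical decisions favoured women regardless of skill:
# women were hired 80% of the time, men only 5% of the time.
hired = np.where(gender == 1,
                 rng.random(n) < 0.80,
                 rng.random(n) < 0.05).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two equally skilled new candidates who differ only by gender.
woman, man = [[1, 1.5]], [[0, 1.5]]
print("P(hire | woman):", model.predict_proba(woman)[0, 1])
print("P(hire | man):  ", model.predict_proba(man)[0, 1])
# The model has simply "learned" the historical preference and applies it.
```

Nothing in such a tool is deliberately discriminatory; it has merely learned that gender predicted hiring in the past, which is exactly the kind of inadvertent screening-out discussed below.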
This is why employers need to do more than simply notify employees about the use of complex AI; for the sake of fairness, employers need to explain what they are doing as clearly as possible, in plain English, so that employees understand how they are being evaluated and selected. It would be a shame if employers inadvertently screened out great candidates, and the last thing employers want is to appear underinclusive or to favour certain kinds of applicants during their selection process.
As can be seen from the above discussion, these proposed provisions are cryptic at best. It is challenging to know what was intended by the drafters since the definitions and exceptions are missing and an explanation of the required statement is nowhere to be found.
The worry is that employers will look at this, scratch their heads, and ask, “Does this mean that we should make a bare-bones statement in public job ads such as, ‘We use artificial intelligence when hiring’?”
Part III.1 appears to be a skeleton, similar to what is set out in Bill C-27’s Artificial Intelligence and Data Act (AIDA). AIDA, too, is plainly missing definitions, and in the case of Ontario’s Bill 149, there is no question that additional clarity is required.
I now turn to an examination of what other jurisdictions have done to create a comprehensive and effective law that deals with hiring and AI, hoping that we can learn from a comparative analysis.
Let us examine New York City’s hiring law dealing with AI and automated decision tools. As stipulated by the New York City Department of Consumer and Worker Protection, employers now need to change how they use AI tools when recruiting and hiring employees. That is, employers who use AI and other machine learning technology in New York City must:
- have the tool independently audited for bias before relying on it, and keep the audit current;
- publish a summary of the results of the most recent bias audit;
- provide candidates and employees with advance notice that an automated tool will be used and of the job qualifications and characteristics it will assess.
Furthermore, employees can make a complaint if they were not provided with adequate notices or were not able to access the results of the bias audit.
Employers should note that in New York City, failure to comply with the rules can lead to fines up to $1,500 per instance.
As can be seen, the above rules are considerably more substantive and helpful when it comes to providing the necessary guidance to employers. New York City’s rules also contain detailed definitions (for example, of the bias audit, the screening and selection of candidates, and the AI technology at issue). Perhaps Ontario can borrow some of the main components of New York City’s rules.
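For readers wondering what a bias audit actually measures, the core of it is a comparison of selection rates across demographic categories. The following sketch uses invented numbers and is not the prescribed audit methodology; it simply shows the kind of selection-rate and impact-ratio calculation involved.

```python
# Simplified, hypothetical bias-audit calculation: compare selection rates
# across categories and compute impact ratios (all figures are invented).
selections = {
    # category: (applicants screened in, total applicants)
    "women": (120, 400),
    "men":   (30, 350),
}

selection_rates = {cat: hired / total for cat, (hired, total) in selections.items()}
highest_rate = max(selection_rates.values())

for cat, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")
# An impact ratio well below 1.0 for a category signals that the tool selects
# that group far less often than the most-selected group.
```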