Navigating the challenges of AI fairness, bias and robustness

In the past several years, artificial intelligence (“AI”) has exploded into the public consciousness and emerged as a driving economic force that underpins some of the world’s largest companies and most exciting new start-ups. The emergence of AI in widespread commercial applications quickly led AI researchers within academia and industry to realize the potential risks of deploying these algorithms in real-world settings. Alongside the stunning successes of modern AI, concerns around bias, discrimination, and robustness of AI algorithms quickly proliferated.

Machine learning algorithms that performed reliably in testing sometimes proved to be brittle and to lack the level of robustness required when faced with the complexity of deployment in real-world environments. Similarly, as machine learning was increasingly incorporated into systems that make predictions about people, it became clear that AI models had a tendency to replicate, and sometimes amplify, biases reflecting societal prejudices contained in the historical datasets used to train them.[1]

These issues have begun to have real consequences for organizations. For example, in 2023, iTutorGroup agreed to pay USD $365,000 and alter its practices to settle an action brought by the US Equal Employment Opportunity Commission (EEOC), which claimed that the AI-based hiring software used by iTutorGroup breached anti-discrimination laws because gender and age discrimination biases incorporated into the software's algorithm caused it to automatically reject female applicants over age 54 and male applicants over age 59.[2]

In response to these issues, researchers began to investigate bias, discrimination, and robustness in AI algorithms and to develop techniques for making models fair and robust when deployed in real-world applications.[3] Due to the rapidly evolving nature of the field, much of this body of research only developed in the past two to three years and, as a result, expertise in these areas is not widespread. A brief summary of the history of modern AI and research around bias, discrimination, and robustness can be found in an appendix to this post, below.

Additionally, in response to these issues, governments and regulatory authorities see a need to regulate AI specifically to protect people against these risks (notwithstanding the application of existing human rights laws) and, to varying degrees, have engaged with AI researchers and experts to craft AI-specific legislation and guidance. AI standards are also being developed to address these risks and are professionalizing the field of AI.

Below we outline why organizations will find it challenging to navigate the legal and ethical risks associated with the development and use of AI without the advice of AI technical and legal experts, who are currently in limited supply, and what steps organizations can take to manage these risks.

The arrival of regulation and standards

Most jurisdictions around the world have begun to formulate regulations for AI, with some of these already in effect.[4]

In the European Union, the Artificial Intelligence Act (“EU AI Act”)[5] came into force on August 1, 2024, while in Canada the Artificial Intelligence and Data Act (“AIDA”) is currently before Parliament as part of Bill C-27, the Digital Charter Implementation Act, 2022.[6] In the United States, some states have passed legislation, such as the Colorado AI Act, but there is currently no national regulatory regime in place. Instead, standards have been developed by the National Institute of Standards and Technology (“NIST”), which is part of the U.S. Department of Commerce. These regulatory efforts and standards attempt to mitigate the risks of harm associated with AI systems while balancing the need to allow technological innovation.

These standards and regulations were not developed in a vacuum; rather, they have attempted to respond to many of the same concerns that researchers have identified, and in doing so have incorporated AI research on fairness, bias, and robustness into the frameworks that will define the legal regimes governing AI.

There are numerous provisions in the EU AI Act and AIDA that draw upon research into fairness and robustness and that require organizations that develop or deploy AI systems to mitigate the risks of unfairness, bias and insufficient robustness. Similarly, the NIST AI Risk Management Framework establishes standards to support the fairness and robustness of AI systems.

AIDA

AIDA will address the risks of unfairness, bias and insufficient robustness in AI systems in the private sector by adopting a risk-based approach to regulate high-impact systems, machine learning models and general-purpose systems.

The draft legislation is mostly focused on high-impact systems, the definition of which is left to regulations that have not yet been developed but are expected to be based in part on the severity of the potential harms caused by a system. AIDA would require measures for identifying, assessing, mitigating and controlling risks of harm or biased output (which is defined under AIDA by reference to prohibited grounds of discrimination under the Canadian Human Rights Act) resulting from the use of high-impact systems.

The AIDA Companion Document[7] outlines six principles that guide the obligations for high-impact systems under AIDA. Two of these principles are “Fairness and Equity” and “Validity and Robustness”. These principles are derived from fairness and robustness research and compliance with AIDA will require companies to obtain expertise in these areas.

The AIDA Companion Document further explains that “Fairness and Equity means building high-impact AI systems with an awareness of the potential for discriminatory outcomes” and that “[a]ppropriate actions must be taken to mitigate discriminatory outcomes for individuals and groups,” while “Validity means a high-impact AI system performs consistently with intended objectives” and “Robustness means a high-impact AI system is stable and resilient in a variety of circumstances.”

The obligation to comply with risk mitigation measures is set out in section 8 of AIDA:

“Section 8: A person who is responsible for a high-impact system must, in accordance with the regulations, establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.”[8]

Further, developers and operators of high-impact systems will be required to, among other things and as applicable:

  • test the effectiveness of such measures;
  • permit or have human oversight of the system;
  • ensure the system is performing reliably and as intended and is robust (namely, it will continue to perform reliably and as intended, even in adverse or unusual circumstances), in accordance with the regulations;
  • monitor for actual and suspected harm caused by the system; and
  • if there is actual or suspected serious harm, assess the harm and the effectiveness of the mitigation measures, cease operation of the system and report the harm to the AI and Data Commissioner in a formal report in accordance with the regulations.

For general-purpose systems, AIDA would require an organization that makes available or manages such systems to establish a written accountability framework, in accordance with the regulations, which must include a description of the personnel who contributed to the system as well as policies and procedures respecting the management of risks relating to the system, including data use.

Non-compliance with AIDA would carry the risk of substantial monetary penalties. Organizations that do not comply with the obligations imposed by AIDA would be subject to administrative penalties, to be established by the regulations, as well as to a fine of not more than the greater of $10,000,000 or 3% of the organization’s gross global revenues.[9]

EU AI Act

The EU AI Act explicitly contemplates both fairness and robustness of high-risk AI systems in Articles 10 and 15. For instance, Article 10(2) requires that data governance practices for high-risk AI systems include an examination of possible biases in the training, testing and validation data sets and the taking of appropriate measures to detect, prevent and mitigate those biases. Article 15 covers accuracy, robustness and cybersecurity of high-risk AI systems and requires that these systems “be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.”[10]

The EU AI Act places emphasis on the importance of using high-quality data and regularly monitoring and testing AI systems for accuracy and reliability.
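As a concrete illustration of the kind of data examination Article 10(2) contemplates, the short sketch below audits a training dataset for differences in representation and positive-label rates across a protected attribute. This is a minimal, hedged example: the column names (“hired”, “gender”), the pandas-based approach and the toy data are illustrative assumptions, not anything prescribed by the Act.

```python
# Illustrative dataset bias audit: compare group representation and positive-label
# rates in training data. Column names and data are hypothetical placeholders.
import pandas as pd

def audit_label_rates(df: pd.DataFrame, label: str, group_col: str) -> pd.DataFrame:
    """Summarize, per group, how many examples exist and how often the positive label occurs."""
    summary = df.groupby(group_col)[label].agg(n="count", positive_rate="mean")
    summary["share_of_data"] = summary["n"] / len(df)
    return summary

# Toy usage: in a real audit, df would be the actual training data.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M"],
    "hired":  [0, 1, 1, 1, 0, 1],
})
print(audit_label_rates(df, label="hired", group_col="gender"))
```

Large gaps in representation or in label rates between groups do not prove the data is unusable, but they are the kind of signal that should trigger the “appropriate measures” the Act refers to, such as re-sampling, re-labelling or further investigation.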

Depending on the circumstances, providers and deployers of high-risk AI systems may be required to, among other things and as applicable:

  • complete fundamental rights impact assessments;
  • implement quality management systems to manage risks;
  • use high-quality data to reduce bias;
  • report incidents;
  • maintain human oversight over the system by individuals who have the necessary competence, training and authority; and
  • ensure the system meets an appropriate level of accuracy, robustness and cybersecurity.

As with AIDA, the EU AI Act provides for the potential of substantial penalties in the event of non-compliance. The Act delegates rule-making on penalties and other enforcement measures to Member States in Article 99(1), but provides that non-compliance with the prohibition on the AI practices referred to in Article 5 is subject to fines of up to the greater of 35,000,000 EUR or 7% of total worldwide annual revenue.[11] For major companies, this could result in fines of billions of dollars.

NIST AI Risk Management Framework

NIST has published its AI Risk Management Framework (“NIST AI RMF”)[12] for safely developing and deploying AI. It is organized according to four ‘functions’: Govern, Map, Measure, and Manage. These functions are intended to provide organizations deploying AI systems with actions that can be taken along the AI lifecycle to manage risk and ensure deployment of responsible and safe AI systems. Many of the risks identified, and the associated actions to be taken to mitigate these risks, explicitly or implicitly reference concerns regarding fairness, bias, and robustness of AI systems.

As an example, Measure 2.11 provides guidelines for evaluating and monitoring fairness and bias of the AI system. The description of the standard and the suggested actions are based upon the research into fairness and bias of AI algorithms of the past several years. Similarly, Measures 2.5 and 2.6 discuss concerns about model robustness and validity of system predictions in complex environments which may not reflect the training and testing environments in which the AI system was developed.
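By way of illustration, a minimal monitoring check in the spirit of Measure 2.11 might compare the selection rates of a deployed model's decisions across demographic groups and flag large disparities for human review. The sketch below is an assumption-laden example, not part of the NIST framework itself: the 0.8 threshold mirrors the well-known “four-fifths rule” heuristic, and the group labels and decisions are placeholders.

```python
# Minimal fairness monitoring sketch: selection rates per group and the ratio between
# the lowest and highest rate. Threshold and data are illustrative, not NIST-mandated.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-decision rate per group; decisions are 0/1 model outcomes."""
    counts, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        counts[g] += 1
        positives[g] += d
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest group selection rate to the highest (1.0 means identical rates)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy usage: flag the system for human review if the ratio drops below 0.8.
ratio = disparate_impact_ratio(
    decisions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(f"Disparate impact ratio: {ratio:.2f} -> {'review' if ratio < 0.8 else 'ok'}")
```

In practice, a check of this kind would run continuously over production decisions and feed into the documentation and escalation processes that the Measure function contemplates.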

The NIST AI RMF helps to establish what will be considered appropriate or reasonable risk mitigation measures, and organizations will be able to draw on these standards to establish internal policies that support compliance with the law. These standards and regulations will also lay the groundwork for establishing standards of care in negligence litigation and provide courts with models of what responsible AI governance and deployment look like.

Organizations deploying AI systems need expertise

The arrival of regulations and standards, along with a maturing AI ecosystem generally, heralds the end of the wild west era of AI development and deployment. Organizations will need to ensure that they have the relevant expertise to navigate this new environment or they will risk significant liability, both from non-compliance with regulatory regimes and from litigation.

Compliance with these regulations will be challenging for organizations, as it will require both technical and legal expertise with regard to bias, discrimination, and robustness of AI systems. For instance, determining what constitutes an “appropriate” level of accuracy, robustness, or any other form of risk mitigation requires expertise in, and knowledge of, state-of-the-art risk mitigation measures. The relative scarcity of expertise in these areas, combined with the risks of non-compliance, will make it imperative for organizations deploying AI systems to seek out expert advice to ensure that they have the policies, frameworks, and technical safeguards in place to comply with what is required under the legislation.

While research into the fairness, bias, and robustness of AI algorithms has increased exponentially in recent years, it remains a relatively niche area of research found only at select institutions at the graduate level or within certain tech company research institutes. Knowledge of the state of the art in these areas is far from widespread and has yet to be integrated into standard computer science curricula. As a result, there is a dearth of expertise in these domains, and most organizations are unlikely to have personnel with the technical knowledge to ensure that their deployment of AI systems complies with regulations and standards.

Along with the scarcity of technical expertise in these areas, there is also a lack of legal expertise with regard to emerging AI issues. Organizations will need legal counsel who possess an in-depth understanding of the technology and of how the incoming legal regimes relate to it.

What steps can organizations take?

Organizations that are developing or using AI should take proactive measures to manage the risks associated with AI, including regulatory compliance and litigation risks.

Organizations can start by establishing AI policies or frameworks for the responsible use of AI that are in line with current and expected regulatory requirements and leading standards such as the NIST AI RMF. These policies or frameworks should include, among other things, AI governance policies, measures for monitoring the output and performance of AI systems, and state-of-the-art fairness and robustness testing for AI models. They should also reflect the input of AI technical and legal experts who have a more direct understanding of the risks and how to effectively manage them.

Organizations should take a tiered approach, based on the evolving regulated risk categories of AI systems, to carrying out internal assessments of the AI systems they intend to develop or operate. AI systems should be designed with fairness, risk of bias and robustness in mind. This should involve regular assessments of AI systems to detect and correct biases, and maintaining a level of human oversight where an AI system is used to make decisions about people.

While it may be impossible to fully eliminate biases from the development of AI systems, there are effective ways to mitigate bias risk, such as raising awareness among those who design AI systems so that they recognize and mitigate their own biases by working with diverse and interdisciplinary teams, incorporating research on unconscious bias into the development and training of AI systems, using diverse and current high-quality training data sets, and appropriately monitoring AI system outputs. It is also important to have mechanisms in place to obtain feedback on AI outputs from those using, or impacted by the use of, AI systems.

Implementing these measures requires expertise, so it is imperative that organizations making use of AI also assess whether they have sufficient expertise, whether internally or through external engagement, and, where they do not, effectively source adequate expertise, taking into account the current limited supply. Recognized AI standards, such as the NIST AI RMF, can also help with the development and implementation of these measures, including the determination of their adequacy.

While AI offers undeniable potential to enhance productivity and drive innovation, navigating its benefits and risks requires a nuanced approach. As the regulatory landscape of AI continues to evolve rapidly, organizations must be mindful of the legal and ethical considerations surrounding AI use, particularly regarding potential bias, discrimination, and robustness concerns. By implementing best practices like those outlined above, informed by both expert AI technical and legal advice, organizations can harness the power of AI responsibly while mitigating regulatory compliance and liability risks.

APPENDIX

A brief history of modern artificial intelligence

Artificial intelligence is not a well-defined scientific term, and its precise meaning has tended to change over time and in relation to technological developments. For instance, a computer performing simple arithmetic would have once been seen as the cutting edge of AI, whereas now self-driving cars and Large Language Models (“LLMs”) exemplify what we think of as AI.

While the explosion into public consciousness and the emergence of AI as an economic powerhouse are recent phenomena, the research that underpins the technology actually extends back decades, or arguably further. Early modern research into AI can be traced to Alan Turing – the intellectual godfather of the computer – who thought deeply about what it meant to have “artificial intelligence” and worked to develop it. After the development of the computer, other researchers – notably Marvin Minsky, Nathaniel Rochester, John McCarthy, and Claude Shannon – took up Turing’s mantle and laid the foundations for what would become the modern AI research program.

AI research historically can be broadly divided into two main camps: those who thought expert systems composed of extensive logical or symbolic models, imitating step-by-step reasoning, could simulate human intelligence, and those who thought that designing algorithms that could “learn” from exposure to data was the path to developing AI. The first school of thought is generally referred to as “expert systems” while the second is called “machine learning.”

Early successes in AI were driven by these rule-based expert systems, but as the complexity of problems grew, the limitations of these systems became apparent. As an example, Deep Blue, the chess-playing expert system, defeated Garry Kasparov in 1997 and was hailed as a significant milestone in the development of AI. Chess, however, is a relatively simple game mathematically, and this is what allowed expert systems to surpass chess grandmasters. The game of Go, on the other hand, which is far more complex mathematically than chess, proved impossible for expert systems to master.

As data became more plentiful and computing power steadily improved, the machine learning school of AI began to see more and more success. By the 2000s, techniques like linear and logistic regression, support vector machines, and random forests were standard across a wide range of disciplines. However, the field of AI was still limited to narrow and discrete problems. Then, in 2012, modern AI had its breakout moment. University of Toronto PhD students Alex Krizhevsky and Ilya Sutskever, along with their supervisor, Geoffrey Hinton, entered a deep learning-based computer vision model into the annual image classification competition known as the ImageNet challenge. Their model, known as AlexNet[13], used a novel deep convolutional neural network[14] architecture and it blew the field away. This moment heralded the beginning of the deep learning[15] revolution and the arrival of AI as a transformative technology.

Deep learning is a subset of machine learning based on algorithms known as artificial neural networks, which were inspired by the biological neural networks in human brains. The performance of AlexNet sparked interest and investment in AI research, and the big tech companies began to invest heavily in deep learning. Modern AI is now essentially all machine learning, and almost entirely based on deep learning. Related deep learning algorithms underpin ChatGPT and the other LLMs that currently capture the public imagination and attract enormous capital investment.[16] It was also a deep reinforcement learning system that surpassed Go grandmasters back in 2016.[17]

The incredible progress in AI performance driven by this deep learning revolution was not merely a research curiosity. AI-powered tools and applications quickly began to generate enormous economic value. It is hard to overstate the significance of the AI wave of the past decade – AI went from being a fringe research area in academic laboratories to underpinning the world’s largest corporations. At the time of writing, the world’s six largest public companies by market capitalization (Microsoft, Apple, Nvidia, Alphabet, Amazon, and Meta) all have business models that depend heavily on AI.

Fairness, bias, and robustness in AI algorithms

As the performance of AI models improved, businesses began to deploy AI-powered applications and tools at a frenetic pace. These tools create enormous value and hold tremendous upside, but they also introduce new risks and failure modes. One area of concern that quickly became apparent was the potential for AI models to exhibit biased or discriminatory outputs.

As explained above, modern AI applications are driven by machine learning models. Because machine learning algorithms learn from the data on which they are trained, they are susceptible to incorporating, and sometimes amplifying, biases found in that data. The risk of training a biased algorithm is sometimes obvious – for instance, when a model is trained to make predictions about people using a dataset that reflects a history of discriminatory decision making – but bias can also creep into machine learning models in much more subtle and unintuitive ways.

Real-world examples of AI models that exhibit discriminatory behaviour or performance are not hard to find. Several major tech companies have released facial recognition tools that were found to perform very well on white male faces but very poorly on Black female faces. Similarly, tech companies have used hiring tools that were later found to exhibit bias based on gender or race, and companies releasing language and image generation tools have routinely struggled to ensure that their models do not reproduce bias or discrimination contained in the datasets on which they were trained.

The AI research community quickly realized the potential for algorithms to reproduce, and even amplify, bias and discrimination and to lead to inequitable outcomes. In response to this concern, researchers attempted to define, in mathematical terms, what it means for an algorithm to be fair. These attempts had mixed success and resulted in several definitions or metrics against which an algorithm’s outputs can be compared in order to determine whether they are discriminatory.[18] These fairness metrics are somewhat limited in their applicability, and because fairness is inherently a contested philosophical and political concept, reducing it to a single mathematical definition requires making certain normative assumptions.
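To give a flavour of these definitions, two of the most commonly cited metrics can be stated formally. The sketch below uses Ŷ for the model’s prediction, Y for the true outcome and A for a protected attribute; it is an illustrative statement of two metrics drawn from the literature cited above, not an exhaustive or authoritative taxonomy.

```latex
% Demographic parity: the rate of positive predictions is the same for every group.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b

% Equalized odds (Hardt, Price and Srebro, 2016): true positive and false positive rates
% are equal across groups, i.e. the condition holds separately for Y = 1 and Y = 0.
P(\hat{Y} = 1 \mid A = a,\, Y = y) = P(\hat{Y} = 1 \mid A = b,\, Y = y) \quad \text{for } y \in \{0, 1\}
```

Notably, results in this literature show that such metrics can be mutually incompatible on the same data, which is part of why no single mathematical definition is treated as a complete test of fairness.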

Along with concerns over fairness, it quickly became apparent that while AI models displayed stunning performance in relatively controlled testing environments and on benchmarking tasks, the complexity of real-world applications often revealed them to be brittle and to lack the robustness required for safe deployment in high-stakes scenarios.[19] This realization sparked research into methods for making AI models more robust and the development of sophisticated techniques to ensure that model performance holds up to the complexities and challenges of real-world applications.

Shockingly, researchers discovered that “adversarial attacks” on machine learning models could produce wildly inaccurate predictions from even the best models.[20] For example, it is possible to distort an image in a way that is imperceptible to humans but that will cause an otherwise accurate computer vision model to radically alter its prediction of what the image shows. Even outside of adversarial machine learning, however, the performance of highly touted AI systems has often been somewhat disappointing, as developers have frequently underestimated the complexity of the environments in which their models are deployed.
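To make the idea concrete, the sketch below implements one well-known attack of this kind, the fast gradient sign method, which nudges each input pixel slightly in the direction that most increases the model’s loss. This is a minimal illustration under stated assumptions: the tiny PyTorch model and random data are placeholders, and the same perturbation applied to a real image classifier is what produces the imperceptible-but-damaging distortions described above.

```python
# Illustrative fast-gradient-sign-style adversarial perturbation.
# The model and data below are stand-ins for demonstration, not any production system.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small, sign-of-gradient perturbation that tends to increase
    the model's loss while keeping each pixel change at most epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # step in the direction that hurts the model most
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

if __name__ == "__main__":
    # Toy classifier on 8x8 "images", purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
    x = torch.rand(4, 1, 8, 8)             # batch of 4 random images
    y = torch.randint(0, 10, (4,))         # arbitrary labels
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())         # maximum perturbation is at most epsilon
```

Robustness testing of this kind, alongside evaluation on data that differs from the training distribution, is one practical way to probe how a model will behave outside the controlled conditions in which it was developed.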

Despite the increased amount of research in this area in the past few years, there remains a lack of comprehensive understanding of how pertinent concepts of bias or discrimination should be understood in the context of AI and what measures to combat bias and discrimination are both realistically possible and justified. Much more research in this area is required.


[1] S. Barocas and A. Selbst, “Big Data’s Disparate Impact”, 104 California Law Review 671 (2016).

[2] https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit

[3] See e.g. S. Barocas, M. Hardt, and A. Narayanan, Fairness and Machine Learning: Limitations and Opportunities. MIT Press, 2023, https://www.fairmlbook.org.

A. Chouldechova, “Fair prediction with disparate impact: A study of bias in recidivism prediction instruments,” Big Data, vol. 5, no. 2, pp. 153–163, 2017; S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq, “Algorithmic decision making and the cost of fairness,” ser. KDD ’17, Association for Computing Machinery, 2017, pp. 797–806, isbn: 9781450348874, doi: 10.1145/3097983.3098095; C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, “Fairness through awareness,” in Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ser. ITCS ’12, Association for Computing Machinery, 2012, pp. 214–226, isbn: 9781450311151, doi: 10.1145/2090236.2090255; M. Hardt, E. Price, and N. Srebro, “Equality of opportunity in supervised learning,” in NIPS, 2016; R. Berk, H. Heidari, S. Jabbari, M. Kearns, and A. Roth, “Fairness in criminal justice risk assessments: The state of the art,” Sociological Methods & Research, vol. 50, no. 1, pp. 3–44, 2021, doi: 10.1177/0049124118782533; M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi, “Fairness beyond disparate treatment and disparate impact: Learning classification without disparate mistreatment,” in Proceedings of the 26th International Conference on World Wide Web, ser. WWW ’17, International World Wide Web Conferences Steering Committee, 2017, pp. 1171–1180, isbn: 9781450349130, doi: 10.1145/3038912.3052660; J. Kleinberg, S. Mullainathan, and M. Raghavan, “Inherent trade-offs in the fair determination of risk scores,” in 8th Innovations in Theoretical Computer Science Conference (ITCS 2017), C. H. Papadimitriou, Ed., ser. Leibniz International Proceedings in Informatics (LIPIcs), vol. 67, Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2017, 43:1–43:23, isbn: 978-3-95977-029-3, doi: 10.4230/LIPIcs.ITCS.2017.43; B. Woodworth, S. Gunasekar, M. I. Ohannessian, and N. Srebro, “Learning non-discriminatory predictors,” in Proceedings of the 2017 Conference on Learning Theory, S. Kale and O. Shamir, Eds., ser. Proceedings of Machine Learning Research, vol. 65, PMLR, 2017, pp. 1920–1953.

[4] See e.g. https://www.fairly.ai/blog/map-of-global-ai-regulations

[5] European Union Artificial Intelligence Act (2024) https://artificialintelligenceact.eu/ai-act-explorer/ [EU AI Act].

[6] Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Session, 44th Parliament, 2021, https://www.parl.ca/legisinfo/en/bill/44-1/c-27, and proposed amendments, https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12751351/12751351/MinisterOfInnovationScienceAndIndustry-2023-11-28-Combined-e.pdf. [AIDA]

[7] The Artificial Intelligence and Data Act (AIDA) – Companion Document, Innovation, Science and Economic Development Canada, https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document. [Companion]

[8] AIDA at s. 8.

[9] See AIDA at s. 30.3.

[10] EU AI Act at Article 15.

[11] Ibid at Article 99(1).

[12] NIST AI RMF Playbook, https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook, [NIST].

[13] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks”, in NIPS’12: Proceedings of the 25th International Conference on Neural Information Processing Systems – Volume 1, p 1097-1105.

[14] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition,” in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998, doi: 10.1109/5.726791.

[15] LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015). https://doi.org/10.1038/nature14539.

[16] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17). Curran Associates Inc., Red Hook, NY, USA, 6000–6010.

[17] Silver, D., Huang, A., Maddison, C. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016). https://doi.org/10.1038/nature16961.

[18] See supra at note 2.

[19] See e.g. D. Heaven, “Why Deep-Learning Ais are so Easy to Fool”, Nature news feature 2019, https://www.nature.com/articles/d41586-019-03013-5.

[20] See e.g. Szegedy, Christian; Zaremba, Wojciech; Sutskever, Ilya; Bruna, Joan; Erhan, Dumitru; Goodfellow, Ian; Fergus, Rob (2014-02-19). “Intriguing properties of neural networks”.; Biggio, Battista; Roli, Fabio (December 2018). “Wild patterns: Ten years after the rise of adversarial machine learning”. Pattern Recognition. 84: 317–331.; Kurakin, Alexey; Goodfellow, Ian; Bengio, Samy (2016). “Adversarial examples in the physical world”.
