How does ISO/IEC 42001 impact AI governance?

This article discusses ISO/IEC 42001 (the Standard) and what it means for Canadians working in the area of artificial intelligence (AI).

What is ISO/IEC 42001?

ISO/IEC 42001 (the Standard) is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System within organizations. It is designed for entities that provide or use AI-based products or services.

More specifically, an AI Management System is a set of interrelated or interacting elements of an organization intended to establish policies and objectives, as well as processes to achieve those objectives, in relation to the responsible development, provision, or use of AI systems. The Standard sets out the requirements, and provides guidance, for building and maintaining such a system within the context of an organization.

The Standard is important because it is the first AI Management System standard, and it provides invaluable guidance to help organizations navigate the AI landscape. It addresses challenges unique to AI, such as ethics and transparency, and offers a structured method for managing AI risks and opportunities while balancing them against innovation and governance obligations.

In addition to helping organizations prepare for Bill C-27's Artificial Intelligence and Data Act (AIDA), the Standard provides a framework that helps organizations plan for their responsible and effective use of AI. This, in turn, leads to increased transparency and reliability, as well as cost savings and efficiency gains, for organizations of any size, across all industries, that plan to develop, provide, or use AI-based products or services.

How does this standard relate to Bill C-27, specifically AIDA?

As we are all aware, Bill C-27 (proposed privacy and AI legislation) was first introduced in the House of Commons in June 2022 after Bill C-11 (proposed privacy legislation) died on the order paper. Bill C-27 received second reading in April 2023 and was then referred to the Standing Committee on Industry and Technology, where interested parties made submissions.

However, disappointingly, not much has transpired since then, while other jurisdictions have sped past Canada and left it in the dust, unless you count the multiple confusing and convoluted amendments made to AIDA in committee. The last I heard, on Michael Geist's Law Bytes podcast, the committee had begun its line-by-line review of the bill.

According to the ISO website, implementing the Standard can help organizations with the following:

  • Responsible AI: ensuring the ethical and responsible use of AI
  • Reputation management: enhancing trust in AI applications
  • AI governance: supporting compliance with legal and regulatory requirements
  • Practical guidance for managing AI-specific risks
  • Identification of opportunities to innovate within a structured framework

Consequently, implementing the Standard can bolster organizations' ability to comply with any AI legislation that Canada ultimately enacts. In fact, it may go a long way toward preparing Canadian organizations for legislation that has been brewing for years amid significant inaction on the part of the government.

What can we take from this going forward?

It is important to note that ISO/IEC has released other important standards that work in conjunction with the Standard discussed above:

  • ISO/IEC 22989: establishes common-language definitions of AI-related terminology and outlines emerging concepts in AI.
  • ISO/IEC 23053: establishes a framework for describing generic AI systems that use machine learning technology, which promotes interoperability among AI systems and their components.
  • ISO/IEC 23894: establishes guidance for managing AI-related risks in organizations developing or deploying AI products and services, outlining processes for integrating AI risk management into organizational activities and helping to identify, assess, and mitigate those risks.

It is recommended that organizations take a closer look at these standards as well. In addition, organizations are encouraged to:

  • understand their internal and external environments when identifying the needs and expectations of stakeholders.
  • establish clear AI policies, define roles and responsibilities, and integrate AI governance into their overall strategic objectives.
  • be proactive and identify risks and opportunities from the outset.
  • think about resource allocation from the outset (financial, technological, and HR).
  • implement processes for responsible AI development, deployment, and use throughout the lifecycle.
  • monitor performance continuously and evaluate AI systems for accuracy and compliance.
  • continually improve processes and systems.
AI governance
AI management system
artificial intelligence
ISO/IEC 42001
