Internal audit and generative AI

I’m not going to get involved in the debate about whether internal audit should be leaping (hopefully forward) to leverage AI in our work.

I remain convinced that we should understand the more significant risks to enterprise objectives, identify the audits we want to perform, and only then select the best tools for the job – which may or may not include AI.

AI may be great at detecting errors or even fraud and cyber breaches. But that is management’s job, not internal audit’s job.

Our job is to provide assurance, advice, and insight.

That can include:

  • Whether management has appropriate controls and security over its use of AI
  • Whether it is optimizing the use of technology in general
  • Whether it has the ability to know when and where to use which tools

With that last point in mind, I am sharing two pieces you might enjoy.

Here are just a few nuggets:

  • Incorrectly used, AI may make up facts, be prejudiced, and leak data. In board packs, this means a real risk for directors of being misled or failing to discharge regulatory duties.
  • …we can easily mistake it for an “everything” tool and use it on the wrong problems. And when we do, our performance suffers. A Harvard study showed this in action, taking smart, tech-savvy BCG consultants and asking them to complete a range of tasks with and without generative AI tools. The consultants were 19 percentage points less likely to reach correct conclusions when using generative AI on tasks that appeared well-suited for it but were actually outside of its capabilities. In contrast, on appropriate tasks, they produced 40% higher quality results and were 25% quicker. The researchers concluded that the “downsides of AI may be difficult for workers and organizations to grasp.”
  • …because AI models reflect the way humans use words, they also reflect many of the biases that humans exhibit
  • …while AI is great at making its answers appear plausible and written by a human, the way they’re generated means that they’re not necessarily factually correct — the model simply extrapolates words from its training data and approximates a solution. As Dr Haomiao Huang, an investor at renowned Silicon Valley venture firm Kleiner Perkins, puts it: “Generative AI doesn’t live in a context of ‘right and wrong’ but rather ‘more and less likely.’”
  • …in leading the finance function, the CFO can’t implement gen AI for everyone, everywhere, all at once. CFOs should select a very small number of use cases that could have the most meaningful impact for the function.
  • The best CFOs are at the vanguard of innovation, constantly learning more about new technologies and ensuring that businesses are prepared as applications rapidly evolve. Of course, that doesn’t mean CFOs should throw caution to the wind. Instead, they should relentlessly seek information about opportunities and threats, and as they allocate resources, they should continually work with senior colleagues to clarify the risk appetite across the organization and establish clear risk guardrails for using gen AI well ahead of the test-and-learn stage of a project.

Is management sufficiently ‘intelligent’ to know when and where to use AI for maximum ROI?

Are you helping? Or are you auditing them after the fact, shooting the wounded?
