Top GenAI Success Stories for Risk and Assurance Professionals, and the Crucial Caveats You Need to Know

Author: Richard Clapton, CISA, IACERT, MICA
Date Published: 22 August 2024
Read Time: 4 minutes

What is the best way to react to finding out your taxi driver colleague drove into a river because his Sat-Nav told him it was the right way to go? If your answer was, “Come to work wearing a snorkeling mask the next day,” then you were right, and you can reward yourself with the knowledge that you have some classic British humor. That story came from one of the first articles I remember reading about automation bias, all the way back in 2008!

Ever since technology systems have been able to give us guidance, we fallible humans have been susceptible to automation bias, relinquishing control of our decision-making even in the face of obvious evidence (such as, say … a large body of water literally right in front of you).

I recently wrote an article in the ISACA Journal, “When Computer Says No,” which discusses the risks of automation bias and the importance of continuous professional development to help companies achieve their objectives.

The article focused on how risk and assurance professionals can add value to their organizations and highlighted useful resources and considerations in the constantly evolving world of AI (sounds like a good read, right?). 

In this blog post, I’m turning the focus onto risk and assurance professionals themselves, outlining how AI, and specifically generative AI, can be a huge time-saver and idea-generator. But be mindful: it can also lead to the same blind faith that led our taxi driver into that river.

I’ve been lucky enough to test some of the enterprise versions of the latest and greatest generative AI products on the market at the moment. Here are a few takeaways:

  1. Automated meeting notes can be a huge time-saver. As an auditor, I am used to asking questions, writing notes and thinking through potential risks to follow up on. Having the note-taking handled for me frees up headspace to focus on what an auditee is actually saying, which is a real opportunity.

    Caveat: You still need to read through your notes afterwards, especially if the meeting has multiple accents, some fast-talkers and/or the call quality isn’t great. A few misunderstood words and the AI starts to guess at the context. A meeting about encryption and zip compression soon becomes notes on Taylor Swift and the whereabouts of relevant posters (true story).
  2. Some models can help with data analytics. They do this by generating code in languages such as Python and running the tests in their own environment. This is great because, if you have your own environment, you can rerun the tests yourself.

    Caveat: You need to understand the data that you are using and be clear about the assumptions and tests you want to perform. Asking the AI to group spend by location is easy enough, but it might not realize that a separate column holds the currency. This can lead to some confusion, like asking yourself why your Japanese team is spending 20K on a team dinner (because it is in Yen!). A short sketch of this pitfall follows the list below.
  3. Searching the web for useful examples and/or research. An excellent way to help a report hit home is to have real-life examples of when similar risks have materialized at other companies. Nothing says “You should have a ransomware incident response process” like the biggest leak of NHS patient data in years.

    Caveat: Make sure to validate the sources! I have found that sometimes the examples provided are exactly what I’m looking for and appear to come from a real website. However, when I visit the site, I can’t find the supposed source. I’m still not 100% sure if the source is somehow behind a paywall (which raises a whole list of other questions) or if the AI has just made it up!
  4. Assisting during scoping and planning. This is especially helpful if you are reviewing an area that is new to you or even something you’ve done many times before but are looking for new angles to consider.

    Caveat: You will get a lot more out of the GenAI tool if you incorporate your understanding of the environment you’re working in. For example, if you were to perform a money laundering review at a fully remote, cashless business, you would not need to ask about cash-handling measures in stores. If these kinds of irrelevant questions aren’t filtered out, you are likely to lose the respect of the team or individuals you are reviewing, and any reporting may not be taken seriously.
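
To make the currency pitfall from point 2 concrete, here is a minimal sketch in pandas. Everything in it is invented for illustration (the column names, amounts and exchange rates are hypothetical): the naive grouping treats yen and dollars as the same unit, while converting to a common currency first gives totals you can actually compare.

```python
import pandas as pd

# Hypothetical expense data; "amount" is recorded in each office's local currency.
expenses = pd.DataFrame({
    "location": ["London", "Tokyo", "Tokyo", "New York"],
    "amount": [1200.0, 20000.0, 15000.0, 900.0],
    "currency": ["GBP", "JPY", "JPY", "USD"],
})

# Naive grouping: the sort of code a GenAI tool might produce if the prompt
# never mentions the currency column. Tokyo looks wildly expensive.
naive_totals = expenses.groupby("location")["amount"].sum()
print(naive_totals)

# Normalize to a common currency first (placeholder rates), then group.
# Now the comparison across locations is meaningful.
rates_to_usd = {"GBP": 1.27, "JPY": 0.0066, "USD": 1.0}
expenses["amount_usd"] = expenses["amount"] * expenses["currency"].map(rates_to_usd)
converted_totals = expenses.groupby("location")["amount_usd"].sum()
print(converted_totals.round(2))
```

The point isn’t the code itself; it’s that you need to know the currency column exists before you can tell the tool (or check) that it has been handled.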

This shouldn’t come as a surprise, but just because we have this new, incredibly smart, usable and useful black-box AI friend doesn’t mean we should take off our skeptical hats and take a shortcut through a large body of water. We still need to understand the environment we’re working in, shape our reviews in the context of the organization, validate sources and integrate outputs. These are all skills that risk and assurance professionals have in abundance, and they will help us thrive in the GenAI world.

Additional resources