Artificial Intelligence in Internal Audit
The rapid development of AI across industries brings with it the emergence of new risks, prompting organizations to adapt their control functions to the use of AI. In this context, the role of the internal audit function has expanded: it must now provide assurance over the integrity, fairness, and transparency of AI systems, their responsible use within the organization, and the appropriate mitigation of AI-related risks.
Management Solutions’ value proposition for the Internal Audit function focuses on two areas:
- Auditing AI: with our deep expertise in AI models, risk management, data, and regulation, we are ideally suited to assist in reviewing cross-cutting elements of the AI framework, reviewing intersecting risks, and conducting end-to-end reviews of AI systems and AI-related risks.
- AI for Internal Audit: we help design and develop AI tools to support internal audit activities, both with SIRO, our AI-powered proprietary solution for managing the end-to-end internal audit lifecycle, and with tailor-made tools for testing functions, audit report generation, and other activities (e.g. chatbots for regulation or internal policies).
All of this allows us to help our clients identify and address issues related to the rapid adoption of artificial intelligence systems in a fast-evolving regulatory landscape. In the relatively early stage of AI adoption in which most organizations find themselves, internal audit functions are raising findings that, in our experience, can be grouped into four areas:
- AI organization and governance, including a nascent and inadequate AI governance structure; AI risk management roles and responsibilities not clearly defined (including the role of the second line of defense for intersecting risks); etc.
- AI risk and compliance, including an incomplete inventory of AI solutions; lack of a standardized AI risk classification system and an adequate management tool (e.g. Gamma); lack of controls over key AI aspects such as explainability and fairness; lack of compliance with regulatory requirements (e.g. the AI Act); an outdated regulatory compliance catalogue due to the fast-evolving regulatory landscape; etc.
- Technology and data for AI, including lack of controls over dependency on and concentration of external AI providers; existence of end-user AI environments (e.g. sandboxes) that are not compliant with policies and have insufficient cost control; lack of controls to prevent external solutions from accessing confidential data; etc.
- AI culture, including insufficient training and awareness programs on AI across the organization (in particular, lack of AI training for the Board and Senior Management).