Internal audit often provides the first independent assurance over emerging risks, and artificial intelligence is no exception. Over the past two years, auditors have begun experimenting with AI for tasks such as anomaly detection, text review, and evidence extraction. Adoption is accelerating: Wolters Kluwer reported in its 2024 survey that 39 percent of internal audit leaders already use AI in their functions, and another 41 percent plan to adopt it within twelve months (Wolters Kluwer, 2024).

This is an encouraging signal. Yet there is a gap between adoption and integration. The UK’s Financial Reporting Council (FRC) has monitored how firms apply AI in statutory audits. Its 2023 review noted that while the Big Four and other large audit firms increasingly relied on AI-enabled tools, most were not systematically evaluating how those tools affected audit quality (Financial Reporting Council, 2023). Without integration into existing assurance frameworks, the use of AI risks becoming a parallel track, delivering efficiency but not accountability.

AuditBoard’s “From Blueprint to Reality” study reinforces this concern. Although internal auditors are adopting technology quickly, broader enterprise governance structures often lag behind. Only a minority of organizations connect AI adoption with enterprise-wide AI governance, leaving internal auditors without the cross-functional evidence needed to test effectiveness (AuditBoard, 2024).

These findings resonate with what many of us see in practice. Tools proliferate, but evaluation is inconsistent. Audit functions become early adopters yet lack the mechanisms to translate their experiences into enterprise learning. As peers, we need to ask: how can internal audit’s use of AI avoid becoming another silo?

Several solutions are emerging. One promising approach is to establish AI assurance dashboards: rather than using AI tools in isolation, internal audit integrates their outputs into metrics tracked alongside traditional quality indicators. This allows teams to measure not only efficiency gains but also impacts on assurance reliability. The FRC’s warning about unmeasured quality (Financial Reporting Council, 2023) becomes less acute when dashboards make those impacts visible.
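To make the idea concrete, here is a minimal sketch in Python of what such a metric layer might look like. Every field name (ai_flagged_items, rework_hours, and so on) is a hypothetical illustration, not a standard; the point is simply that AI-tool outputs and traditional quality indicators land in one structure and get reported side by side.

```python
from dataclasses import dataclass

@dataclass
class EngagementMetrics:
    """Per-engagement metrics pairing AI-tool output with traditional
    quality indicators. All field names are illustrative, not a standard."""
    engagement_id: str
    ai_flagged_items: int       # anomalies surfaced by the AI tool
    ai_confirmed_findings: int  # flags that auditors substantiated
    manual_sample_size: int     # items tested by traditional sampling
    manual_findings: int        # findings from traditional testing
    rework_hours: float         # hours spent re-performing AI-assisted work

    @property
    def ai_precision(self) -> float:
        """Share of AI flags that held up under auditor review."""
        return (self.ai_confirmed_findings / self.ai_flagged_items
                if self.ai_flagged_items else 0.0)

def dashboard_rows(engagements: list[EngagementMetrics]) -> list[dict]:
    """Flatten metrics into rows a BI dashboard can ingest."""
    return [
        {
            "engagement": e.engagement_id,
            "ai_precision": round(e.ai_precision, 2),
            "manual_hit_rate": (round(e.manual_findings / e.manual_sample_size, 2)
                                if e.manual_sample_size else None),
            "rework_hours": e.rework_hours,
        }
        for e in engagements
    ]
```

Once AI precision and rework hours sit in the same rows as traditional hit rates, a drop in flag reliability shows up on the same screen as the efficiency gains, which is exactly the visibility the FRC’s critique calls for.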

Another solution is to build “audit of the audit” functions. Just as we subject financial controls to independent testing, AI-enabled audit processes should themselves be audited. This second layer ensures that tools do not introduce unrecognized bias or overfit to patterns auditors expect to find. The recent review by Ojewale et al. (2024) on AI accountability infrastructure highlights that most audit tools are insufficiently validated in real-world settings. Independent evaluation is therefore essential.
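One way to operationalize that second layer is a periodic back-test: hold out a sample the tool’s output has not influenced, have reviewers label it independently, and compare. The sketch below assumes simple boolean flags and labels; precision and recall are standard measures, but the workflow shown is an illustration, not a prescribed methodology.

```python
def backtest_tool(tool_flags: list[bool], reviewer_labels: list[bool]) -> dict:
    """Compare an AI tool's flags against independent reviewer labels
    on a holdout sample the tool did not influence."""
    assert len(tool_flags) == len(reviewer_labels), "both must score the same items"
    pairs = list(zip(tool_flags, reviewer_labels))
    tp = sum(t and r for t, r in pairs)      # flags reviewers confirmed
    fp = sum(t and not r for t, r in pairs)  # false alarms
    fn = sum(r and not t for t, r in pairs)  # issues the tool missed
    return {
        "precision": tp / (tp + fp) if tp + fp else None,  # how trustworthy a flag is
        "recall": tp / (tp + fn) if tp + fn else None,     # how much the tool misses
        "sample_size": len(tool_flags),
    }

# Hypothetical six-item holdout: two confirmed flags, one false alarm, one miss.
print(backtest_tool(
    tool_flags=[True, True, True, False, False, False],
    reviewer_labels=[True, True, False, True, False, False],
))  # precision and recall both come out to 2/3 on this sample
```

Low recall here would be the quantitative face of the overfitting worry above: the tool finds only the patterns it was tuned to expect.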

A third approach involves cross-functional integration. Internal audit should not operate alone when adopting AI. By linking audit outputs to enterprise AI governance frameworks, evidence generated by auditors can feed directly into risk registers, compliance dashboards, and escalation playbooks. PwC’s 2024 Responsible AI Survey shows that only a minority of organizations are achieving this integration today (PwC, 2024). The opportunity is to turn internal audit’s experiments into enterprise-wide governance evidence.
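In practice, that linkage usually comes down to agreeing on a shared record format. Below is one hypothetical shape for the handoff, an audit finding serialized into the fields a risk register might expect; the field names and the escalation rule are assumptions for illustration, not any particular GRC platform’s schema.

```python
import json
from datetime import date

def to_risk_register_entry(finding: dict) -> dict:
    """Map an AI-assisted audit finding onto a generic risk-register record.
    Field names are illustrative; real GRC platforms define their own schemas."""
    return {
        "risk_id": f"IA-{finding['engagement_id']}-{finding['finding_id']}",
        "source": "internal_audit_ai_assisted",
        "description": finding["summary"],
        "severity": finding["severity"],            # e.g. "low" / "medium" / "high"
        "evidence_ref": finding["workpaper_ref"],   # link back to the audit workpaper
        "logged_on": date.today().isoformat(),
        "escalate": finding["severity"] == "high",  # assumed escalation rule
    }

finding = {
    "engagement_id": "2024-17",
    "finding_id": "03",
    "summary": "Model flagged duplicate vendor payments; four of five confirmed.",
    "severity": "high",
    "workpaper_ref": "WP-2024-17-B4",
}
print(json.dumps(to_risk_register_entry(finding), indent=2))
```

The design choice that matters is the evidence_ref field: keeping a pointer back to the workpaper is what lets governance teams treat audit output as testable evidence rather than a summary claim.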

What stands out is that these solutions are not theoretical. They are being implemented in pieces across firms. Dashboards, independent testing, and governance integration are all visible in practice, though unevenly. The challenge is how to make them standard rather than exceptional.

As peers, we should also acknowledge what is missing. We do not yet have widely shared metrics for evaluating AI-enabled audits, nor do we have agreed repositories of validated tools. These gaps make it harder to benchmark and harder to reassure regulators. The field would benefit from greater collaboration across audit leaders, regulators, and standards bodies.

For those already integrating AI into internal audit, what has worked? Where have you encountered resistance? And for those still considering adoption, what questions do you want answered before you take the leap?

The value of peer exchange is that it moves us beyond isolated experiments. By comparing practices, we can accelerate the shift from adoption to integration and avoid repeating the same mistakes. AI in internal audit will only earn trust if it is measured, audited, and embedded into the broader governance system. I look forward to learning from how others are approaching this challenge.


References

AuditBoard. (2024). From blueprint to reality: The state of AI governance programs. AuditBoard. https://auditboard.com/blog/new-research-finds-only-25-percent-of-organizations-report-a-fully-implemented-ai-governance-program

Financial Reporting Council. (2023). The use of technology in the audit of financial statements. FRC. https://www.frc.org.uk

Ojewale, V., Steed, R., Vecchione, B., Birhane, A., & Raji, I. D. (2024). Towards AI accountability infrastructure: Gaps and opportunities in AI audit tooling. arXiv. https://arxiv.org/abs/2402.17861

PwC. (2024). Responsible AI survey: How US companies are managing risk in an era of generative AI. PwC. https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html

Wolters Kluwer. (2024). Internal audit technology survey: The state of AI adoption. Wolters Kluwer. https://www.wolterskluwer.com
