Boards are increasingly asking a blunt question: “Do we even know where AI is being used across our enterprise?” It is not a trivial concern. Research consistently shows that most AI activity inside organizations is hidden from leadership, compliance, and even IT security. This “shadow AI” presents a serious governance and risk blind spot.

Lanai’s 2025 study found that 89 percent of enterprise AI usage is invisible to IT and security teams, often because employees adopt generative AI tools informally or embed third-party AI into workflows without disclosure (HelpNetSecurity, 2025). This aligns with findings from AuditBoard’s 2024 Blueprint to Reality survey, which reported that unclear ownership and fragmented oversight leave many organizations unaware of their true AI footprint (AuditBoard, 2024). PwC’s 2024 Responsible AI Survey reinforces the concern, showing that board members are asking not only about compliance but also about visibility into where AI is already operating (PwC, 2024).

The governance implications are serious. If an organization cannot identify its AI systems, it cannot classify their risk, map them to regulatory frameworks, or prepare evidence for regulators. The EU AI Act, ISO/IEC 42001, and the NIST AI RMF all treat system mapping and classification as a foundational step. The OECD has warned that without reliable classification of AI systems, governance principles remain aspirational rather than operational (OECD, 2023).

Addressing this visibility crisis requires layered solutions. The first step is establishing an AI asset inventory, analogous to how enterprises manage hardware and software assets: each system is logged with its owner, purpose, and data dependencies. The second step is continuous discovery. Static inventories quickly become obsolete as new tools are adopted, so automated discovery mechanisms, combined with disclosure obligations for business units, keep the inventory current. A third step is embedding governance "nodes" within business units, so that AI adoption is reported and aligned rather than hidden. Finally, organizations must link visibility to accountability: every discovered system is mapped to the risk register, documented in evidence packs, and aligned with the relevant regulatory frameworks.
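To make the inventory step concrete, here is a minimal sketch of what a single inventory record and registry might look like. The field names (owner, purpose, data dependencies, risk tier, discovery source) are illustrative assumptions rather than a prescribed schema, and in practice the same structure could live in any asset-management or GRC platform.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIAssetRecord:
    """One entry in an AI asset inventory (illustrative fields only)."""
    system_name: str
    owner: str                       # accountable individual or team
    business_unit: str
    purpose: str                     # plain-language description of the use case
    data_dependencies: List[str]     # datasets or upstream systems the AI relies on
    risk_tier: Optional[str] = None  # assigned later during risk classification
    frameworks: List[str] = field(default_factory=list)  # e.g. ["EU AI Act", "ISO/IEC 42001"]
    discovery_source: str = "self-disclosed"  # or "automated-scan"
    last_reviewed: Optional[date] = None

class AIAssetInventory:
    """Minimal registry: log systems and flag those not yet classified."""
    def __init__(self) -> None:
        self._records: List[AIAssetRecord] = []

    def register(self, record: AIAssetRecord) -> None:
        self._records.append(record)

    def unclassified(self) -> List[AIAssetRecord]:
        # Systems that have been discovered but not yet mapped to a risk tier
        return [r for r in self._records if r.risk_tier is None]

# Example: log a tool surfaced by automated discovery, then list the gaps
inventory = AIAssetInventory()
inventory.register(AIAssetRecord(
    system_name="Contract-summarisation assistant",
    owner="legal-ops@example.com",
    business_unit="Legal",
    purpose="Summarise inbound supplier contracts",
    data_dependencies=["contract repository"],
    discovery_source="automated-scan",
))
for record in inventory.unclassified():
    print(f"Needs risk classification: {record.system_name} (owner: {record.owner})")
```

The point of the sketch is the linkage it enforces: every discovered system carries a named owner and remains flagged until it has been classified and mapped to the relevant frameworks, which is exactly the accountability loop described above.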

Boards are right to be concerned. Without visibility, governance is impossible, and reputational and regulatory risks multiply. Research shows the problem is widespread. The path forward is to make AI visibility as routine as financial reporting or cybersecurity monitoring. Only then can organizations answer the board’s question with confidence.


References

AuditBoard. (2024). From blueprint to reality: The state of AI governance programs. AuditBoard. https://auditboard.com/blog/new-research-finds-only-25-percent-of-organizations-report-a-fully-implemented-ai-governance-program

HelpNetSecurity. (2025). Most enterprise AI use is invisible to security teams. HelpNetSecurity. https://www.helpnetsecurity.com/2025/09/15/lanai-enterprise-ai-visibility-tools/

Organisation for Economic Co-operation and Development. (2023). OECD framework for the classification of AI systems. OECD Publishing. https://oecd.ai

PwC. (2024). Responsible AI survey: How US companies are managing risk in an era of generative AI. PwC. https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html

