Artificial intelligence governance is often discussed in abstract terms. Policies, frameworks, and principles dominate the conversation, while practical examples of how large organizations implement governance remain rare. The ethics-based audit initiative at AstraZeneca offers one of the most detailed case studies of operationalizing AI governance in practice. Examining this example provides insights into both the possibilities and the challenges that organizations face when attempting to move beyond policy statements.

Barth and colleagues (2024) documented AstraZeneca’s twelve-month pilot of ethics-based auditing across its decentralized structure. The goal was to test whether ethical principles could be translated into operational requirements and measurable outcomes. The research revealed that technical considerations—such as model documentation and bias testing—were necessary but not sufficient. The greatest obstacles were structural and cultural.

Harmonization proved difficult across AstraZeneca’s distributed units. Different business segments interpreted governance standards in divergent ways, leading to inconsistent implementation. The case study found that achieving alignment required extensive internal communication and repeated negotiation of scope. This illustrates a common challenge in global organizations: decentralized autonomy collides with the need for consistent evidence when regulators demand clarity.

Change management also emerged as a central theme. Employees often feared that governance would slow innovation or expose vulnerabilities. Building trust required positioning audits not as punitive exercises but as processes designed to protect both the company and the individuals affected by its AI systems. Without that cultural shift, governance risked being viewed as an external imposition.

The AstraZeneca study further demonstrated that ethics boards alone are insufficient. While boards provided oversight, their decisions carried weight only when linked to operational processes. For example, requiring sign-off on risk registers or incident escalation procedures ensured that ethics considerations were embedded in real decision chains rather than treated as symbolic gestures.
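To make the idea of "linked to operational processes" concrete, here is a minimal illustrative sketch, not drawn from the AstraZeneca study itself: a deployment request is blocked unless the associated risk-register entry carries the required sign-offs and has no open incidents awaiting escalation. The role names, fields, and gate logic below are hypothetical assumptions used only to show the pattern.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an ethics sign-off gate on a risk-register entry.
# Role names, fields, and thresholds are illustrative assumptions, not
# details taken from the AstraZeneca case study.

REQUIRED_SIGNOFFS = {"ethics_board", "data_privacy", "model_owner"}

@dataclass
class RiskRegisterEntry:
    system_id: str
    risk_level: str                              # e.g. "low", "medium", "high"
    signoffs: set[str] = field(default_factory=set)
    open_incidents: int = 0

def may_deploy(entry: RiskRegisterEntry) -> bool:
    """Allow deployment only if every required role has signed off
    and no incidents are still open for escalation."""
    missing = REQUIRED_SIGNOFFS - entry.signoffs
    if missing:
        print(f"{entry.system_id}: blocked, missing sign-off from {sorted(missing)}")
        return False
    if entry.open_incidents:
        print(f"{entry.system_id}: blocked, {entry.open_incidents} open incident(s)")
        return False
    return True

if __name__ == "__main__":
    entry = RiskRegisterEntry("triage-model-v2", "high", {"ethics_board", "model_owner"})
    print(may_deploy(entry))   # False until data_privacy also signs off
```

The point of the sketch is that the board's decision becomes a precondition in a real decision chain, rather than a recommendation recorded elsewhere.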

From this case, several lessons stand out. First, governance must be federated. Central standards provide consistency, but local “nodes” inside business units are necessary to translate principles into practice and report evidence back to the center. Second, communication must be continuous. Governance cannot be managed through annual reviews or occasional audits; it requires ongoing dialogue and iterative adjustment. Third, change management must be deliberate. Training, framing, and leadership engagement are essential to shift governance from perceived burden to accepted practice.
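The federated pattern in the first lesson can also be sketched in a few lines, again as an assumption-laden illustration rather than anything documented in the case study: the center defines a list of controls, each business-unit "node" reports the controls it has evidence for, and the center computes the coverage gaps. Control IDs and unit names here are invented.

```python
# Hypothetical sketch of federated evidence reporting: central standards
# define controls; local nodes report evidence; the center flags gaps.
# Control IDs and unit names are invented for illustration.

CENTRAL_CONTROLS = {"C1-documentation", "C2-bias-testing", "C3-incident-escalation"}

node_reports = {
    "oncology-unit": {"C1-documentation", "C2-bias-testing"},
    "biopharma-unit": {"C1-documentation"},
}

def coverage_gaps(reports: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per node, the central controls it has not yet evidenced."""
    return {node: CENTRAL_CONTROLS - evidenced for node, evidenced in reports.items()}

for node, gaps in coverage_gaps(node_reports).items():
    status = "complete" if not gaps else f"missing {sorted(gaps)}"
    print(f"{node}: {status}")
```

Even this toy version shows why continuous communication matters: the gaps only become visible when nodes report against a shared vocabulary of controls.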

The broader literature reinforces these findings. The OECD (2023) has warned that ethical principles fail when they are not embedded into organizational processes, highlighting the need for accountability mechanisms that reach operational levels. PwC’s 2024 survey similarly shows that while most organizations have responsible AI principles, fewer than half have embedded them into enterprise functions (PwC, 2024). Together with the AstraZeneca case, this evidence suggests that the challenge is not conceptual design but translation into organizational practice.

The implication is that effective AI governance cannot be imported as a ready-made solution. It must be cultivated inside the organization, adapting to its structures, culture, and incentives. Large enterprises should anticipate that implementation will require negotiation across units, investment in communication, and sustained leadership commitment. Evidence generation will only be credible when principles, processes, and culture align.

The AstraZeneca case provides reassurance that this work is feasible. But it also serves as a caution that governance will fail if reduced to policies or dashboards. It is the lived process of negotiation, communication, and integration that produces assurance that regulators can trust and that society expects.


References

Barth, S., Wanner, D., Zimmermann, H., & Andreeva, J. (2024). Operationalising AI governance through ethics-based auditing: An industry case study. arXiv. https://arxiv.org/abs/2407.06232

Organisation for Economic Co-operation and Development. (2023). OECD framework for the classification of AI systems. OECD Publishing. https://oecd.ai

PwC. (2024). Responsible AI survey: How US companies are managing risk in an era of generative AI. PwC. https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html
