
Digital Governance After the AI Act: What Boards Need to Know

The European Union’s Artificial Intelligence Act has been adopted and is now law. Some of its provisions are already in effect, some are in the process of being rolled out, and others are still being refined through secondary guidance and parallel reforms. This creates a governance moment that is familiar yet uncomfortable for boards of directors: they must take digital systems and artificial intelligence seriously at the oversight level while the regulatory and jurisprudential ground shifts beneath them.


The central question for directors is no longer whether artificial intelligence matters. That debate is over. AI is embedded in pricing, hiring, credit decisions, supply chains, customer engagement, fraud detection and content moderation. The real challenge now is how boards should exercise credible oversight—without turning the boardroom into a technology lab or drifting into what many practitioners now describe as “AI theatre,” where ethics statements and glossy demos substitute for actual governance.


Corporate law has not changed its core expectations. Boards are expected to govern systems, information flows, accountability and risk—not to design models or select algorithms. The AI Act does not rewrite directors’ duties but it does give sharper content to what reasonable digital governance looks like. For companies within its scope, the Act effectively raises the floor for board-level oversight of artificial intelligence, data and critical digital systems.


This article takes an operational view. Instead of revisiting doctrinal debates, it asks a narrower but more practical question: what does a minimal, defensible digital-governance architecture look like for boards after the AI Act—and how can it respect the boundary between oversight and management?


Governance Evolved: How the EU AI Act defines the modern boardroom.

The AI Act Is in Force but the Rulebook Is Still Moving

One of the defining features of the AI Act is its staggered implementation. Rather than a single “go-live” date, the regulation unfolds in phases, aligned to risk categories and system types.


Certain practices deemed unacceptable, such as specific forms of social scoring and highly intrusive emotion-recognition uses, have been banned since early 2025. Obligations for providers of general-purpose AI models, including transparency and model-documentation requirements, began to apply in August 2025. High-risk system obligations, which are the most operationally demanding for most enterprises, will take effect gradually across 2026 and 2027.


The European Commission has published an implementation timetable, while also proposing a broader Digital Omnibus package that may simplify parts of the wider digital rulebook and extend some deadlines. The result is that boards are being asked to design governance systems mid-rollout—with one eye on current obligations and another on requirements that are not yet fully crystallised.


From a governance perspective, this matters. Directors are accustomed to regulatory uncertainty; AI, however, adds complexity because the systems are already live, adaptive and often deeply embedded in business processes. Waiting for perfect clarity is not an option. At the same time, over-engineering governance structures too early risks locking firms into approaches that may soon need revision.

 

From Cyber and ESG to Digital Governance: A Familiar Pattern

Boards have seen this movie before.


Over the past two decades, several specialised topics have followed a similar trajectory into the boardroom: 

  • Financial controls became a board-level issue after accounting scandals, leading to explicit audit committee responsibility for internal control over financial reporting. 

  • Enterprise risk management gained prominence after the global financial crisis, with many large firms establishing dedicated risk committees. 

  • Climate and ESG oversight moved from investor-relations talking points to formal board responsibilities, reinforced by frameworks such as the Task Force on Climate-related Financial Disclosures and later European sustainability reporting rules. 


In each case, a technical management topic attracted regulatory scrutiny, investor pressure and eventually explicit board oversight. Digital governance is following the same path. It has now emerged as a fourth cross-cutting domain of board responsibility—alongside audit, risk and ESG.


Importantly, digital governance is not just another name for cybersecurity. While cyber resilience remains a critical component, digital governance also covers artificial intelligence, data governance, algorithmic decision-making and the reliability of critical digital systems. These issues intersect with operational resilience, compliance and ethics but they are not reducible to any single function.

 

Oversight Versus Management—and Why AI Complicates the Divide

Corporate law has long drawn a line between oversight and management. Directors are expected to supervise systems and controls, not to run operations. In the United States, the duty of oversight articulated in the Caremark line of cases requires boards to make a good-faith effort to implement and monitor systems of reporting and control. A sustained failure to install any system capable of surfacing critical risks can expose directors to liability.


Although Caremark is a U.S. doctrine, its underlying logic has influenced thinking about board responsibilities in Europe and elsewhere. Boards are not guarantors of outcomes but they are expected to ensure that governance systems exist and function.


Artificial intelligence strains this model in several ways.

First, AI systems are often embedded invisibly within everyday processes. A failure may not look like a discrete “AI incident” but rather a compliance breach, discrimination claim or safety issue that crosses functional silos.


Second, many AI systems—particularly machine-learning models—are difficult to explain even for technically sophisticated executives. This creates discomfort for directors who must show that they were reasonably informed when relying on management assurances.


The instinctive response is to pull technical decisions upward: to ask for deeper dives into models, data sets or training techniques. But that response misunderstands the board’s role. The law does not expect directors to approve individual algorithms. What it expects is oversight of the systems by which models are designed, tested, deployed, monitored and retired—and of how responsibility is allocated when things go wrong.


The AI Act reinforces this distinction. It specifies governance expectations without turning boards into engineering committees. 

 

What the AI Act Adds to Board Expectations

The AI Act does not rewrite directors’ duties but it sharpens what regulators consider reasonable governance.


For high-risk systems, providers and deployers must implement:

  • Risk-management processes

  • Data and data-governance practices

  • Technical documentation

  • Human oversight measures

  • Post-market monitoring and incident reporting


Providers of general-purpose models face separate transparency and governance obligations. When these requirements are read together—and alongside existing regimes on data protection, product safety and digital resilience—they effectively define the minimum architecture of a compliant internal governance system.


For boards, several implications follow.

First, companies must classify their AI systems against the Act’s risk tiers and maintain an inventory tied to business functions. This is not a purely technical exercise; it shapes reporting, accountability and escalation.


Second, monitoring does not stop at deployment. Systems must be supervised in use, with mechanisms to detect drift, bias, misuse and unintended impacts.


Third, incident reporting under the AI Act should integrate with existing incident-management, whistleblowing and regulatory-engagement processes. Creating parallel “AI-only” channels risks fragmentation and confusion.


In short, the Act tells boards what must exist, even if it leaves flexibility on how firms organise it.
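To make the first implication concrete: an AI-system inventory tied to business functions and risk tiers is, at bottom, a simple data structure. The Python sketch below illustrates one minimal shape such an inventory might take; the system names, owners and tier assignments are entirely hypothetical, and in practice assigning a tier under the Act is a legal judgement, not a lookup.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers loosely following the AI Act's categories.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AISystem:
    name: str
    business_function: str   # e.g. "credit decisions", "hiring"
    owner: str               # accountable executive
    tier: RiskTier

def tier_summary(inventory: list[AISystem]) -> dict[str, int]:
    """Count systems per risk tier for board-level reporting."""
    counts: dict[str, int] = {}
    for system in inventory:
        counts[system.tier.value] = counts.get(system.tier.value, 0) + 1
    return counts

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystem("CreditScore v3", "credit decisions", "CRO", RiskTier.HIGH_RISK),
    AISystem("CV Screener", "hiring", "CHRO", RiskTier.HIGH_RISK),
    AISystem("Chat Assistant", "customer engagement", "CMO", RiskTier.LIMITED_RISK),
]
print(tier_summary(inventory))  # {'high-risk': 2, 'limited-risk': 1}
```

Even a sketch this simple makes the governance point: once every system has a named business function, an accountable owner and a risk tier, reporting and escalation have something concrete to attach to.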

 

A Minimal Digital-Governance Architecture for Boards

For a listed company with meaningful exposure to the AI Act, a defensible board-level digital-governance framework does not need to be elaborate. It does, however, need to be explicit. Four elements are particularly important.


1. Clear Committee Ownership

Boards must decide where digital governance sits. 

In some organisations, the audit committee is the natural home because internal controls, documentation and assurance are central. In others, a risk committee is better positioned because AI-related risks are part of a broader enterprise risk landscape. Highly digital businesses may opt for a dedicated technology or digital committee.


The specific structure matters less than clarity. At least one committee should explicitly state that artificial intelligence, data and critical digital systems fall within its remit—and explain how that remit connects to cybersecurity and ESG oversight. Silence or ambiguity is increasingly hard to defend.


2. Executive Accountability That Is Visible

On the management side, firms are experimenting with roles such as Chief AI Officer or Chief Data Officer. From a board perspective, titles are less important than accountability.


Directors should be able to answer three simple questions:

  • Is there a senior executive clearly accountable for the AI and data-governance framework as a whole? 

  • How is that responsibility integrated with risk, compliance, internal audit, information security and data protection? 

  • Does this executive report regularly to the board or committee, rather than appearing only when a new initiative needs approval? 


Regular reporting signals that AI governance is operational, not aspirational.


3. A Coherent Information Architecture

Boards cannot—and should not—track every model. But they do need a consistent, structured view of exposure, controls and incidents.


A practical digital-governance dashboard might include three categories of indicators: 

  • Exposure indicators: the number of AI systems in use, mapped by business function and risk tier under the AI Act. 

  • Control indicators: progress against recognised frameworks such as the NIST Artificial Intelligence Risk Management Framework and its Generative AI Profile, or adoption of governance standards like ISO/IEC 42001, the first international AI management-system standard. 

  • Incident indicators: the number and nature of significant AI-related events, including how lessons learned have been translated into changes in controls. 


What matters is not perfection but consistency. Directors should see the same categories of information over time, allowing them to spot trends and gaps.
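The three indicator categories above can be thought of as one repeating record per reporting period. The sketch below shows one possible shape for such a record in Python; the field names, framework labels and figures are assumptions for illustration, not a prescribed reporting format.

```python
from dataclasses import dataclass, field

# Illustrative quarterly dashboard record covering the three indicator
# categories: exposure, controls and incidents. All values are invented.
@dataclass
class QuarterlyDigitalDashboard:
    quarter: str
    exposure_by_tier: dict[str, int] = field(default_factory=dict)    # systems per AI Act risk tier
    control_progress: dict[str, float] = field(default_factory=dict)  # fraction complete per framework
    significant_incidents: int = 0

def incident_trend(history: list[QuarterlyDigitalDashboard]) -> list[int]:
    """Consistent categories each period let directors spot trends at a glance."""
    return [q.significant_incidents for q in history]

history = [
    QuarterlyDigitalDashboard("2026-Q1", {"high-risk": 4}, {"NIST AI RMF": 0.4}, 2),
    QuarterlyDigitalDashboard("2026-Q2", {"high-risk": 5}, {"NIST AI RMF": 0.6}, 1),
]
print(incident_trend(history))  # [2, 1]
```

The design choice worth noting is the fixed schema: because every quarter reports the same categories, a falling or rising incident count is visible without any analysis.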


4. Defined Escalation Thresholds

Finally, boards should expect management to define what counts as a digital incident that must be escalated.


Examples might include:

  • Any AI-related event triggering notification under the AI Act 

  • Incidents with clear implications for safety or fundamental rights 

  • Major regulatory investigations or information requests focused on AI systems 

The board does not need to micromanage thresholds but it should test whether they exist, are understood in practice and align with the company’s risk appetite and legal obligations. 
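Escalation thresholds of this kind are, in effect, a small rule set that management applies to every incident. A minimal sketch in Python, mirroring the three example thresholds above with invented tag names, might look like this:

```python
# Hypothetical escalation rule; the trigger tags are invented for
# illustration and would be defined by management in practice.
ESCALATION_TRIGGERS = {
    "ai_act_notification",       # event triggering notification under the AI Act
    "safety_or_rights_impact",   # clear implications for safety or fundamental rights
    "regulatory_investigation",  # major investigation or information request on AI systems
}

def must_escalate_to_board(incident_tags: set[str]) -> bool:
    """Escalate when an incident carries any board-defined trigger tag."""
    return bool(incident_tags & ESCALATION_TRIGGERS)

print(must_escalate_to_board({"model_drift", "ai_act_notification"}))  # True
print(must_escalate_to_board({"minor_outage"}))                        # False
```

The board’s job is not to write such rules but to confirm that something equivalent exists, is understood by those who apply it, and matches the company’s risk appetite.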

 

Avoiding the Two Governance Extremes

As digital governance gains prominence, boards risk drifting toward one of two unhelpful extremes.


At one extreme, directors attempt to interrogate the internal mechanics of individual models. This rarely improves oversight and blurs the boundary with management.


At the other extreme, boards receive polished presentations on AI strategy, innovation and ethics—without concrete information about inventories, incidents or control weaknesses.


A sustainable approach treats digital governance the way mature boards treat financial reporting or climate risk. Directors focus on systems, accountability and assurance. They ask whether internal audit reviews AI controls. They seek external perspectives on how their governance compares with peers. They ensure that significant incidents lead to visible changes in governance, not just lessons-learned slides.

 

Looking Ahead: Digital Governance as a Settled Board Responsibility

By the time high-risk obligations under the AI Act are fully in force, investors, regulators and courts are likely to treat digital governance as a settled part of board work. Boards will not be judged on whether they understood every algorithm but on whether they put in place reasonable systems to oversee them.


Those that succeed will not have turned directors into technologists. They will have done what boards have always done best: applying familiar governance principles—clarity of responsibility, reliable information, escalation and assurance—to a new and rapidly evolving domain.


In that sense, the AI Act is less a revolution than a catalyst. It accelerates a shift that was already underway and makes explicit what good boards were beginning to do anyway. Digital governance is no longer optional, experimental or delegable. It is now part of the core craft of the modern board.


Our Directors’ Institute - World Council of Directors can help you accelerate your board journey by training you to carry out your roles and responsibilities effectively, enabling you to make a significant contribution to the board and to raise corporate governance standards within your organisation.
