Can We Prevent a Catastrophic AI Scenario? The Limits of Corporate Governance

Experts, policymakers, and international leaders are concerned that rapid breakthroughs in artificial intelligence (AI) could pose alignment problems and catastrophic risks. Although various risks have been outlined individually, there is an urgent need for a systematic discussion of potential hazards to better inform mitigation efforts. This blog groups catastrophic AI risks into four categories: malicious use, AI race, organisational risks, and rogue AIs. Malicious use refers to the intentional use of AIs to cause harm; AI race refers to competitive environments that push actors to deploy unsafe AIs or cede control to them; organisational risks highlight how human factors and complex systems can increase the likelihood of catastrophic accidents; and rogue AIs refer to the risk of losing control over AIs as they become more capable than their developers.


The rapid progress of artificial intelligence (AI) in recent years has sparked concerns among experts, policymakers, and global leaders regarding the potential dangers associated with advanced AIs. Like any powerful technology, AI demands responsible handling to mitigate risks and leverage its potential for societal advancement. Yet, easily accessible information is scarce on the catastrophic or existential risks posed by AI and how they could be mitigated. Existing sources on this topic are often fragmented across different papers, tailored to specific audiences, or focused on particular risks.


Does research on AI safety help to clarify long-standing issues in corporate governance?

And might the economics and law of corporate governance help us define the new issues surrounding AI safety? We find that the corporate turmoil at OpenAI has brought to light five important lessons about the corporate governance of AI, along with one very serious warning.


1. Traditional corporate governance is insufficient for companies to safeguard the public interest.

That, at least, is what OpenAI and Anthropic, two of the most prominent actors in AI development, concluded when they were established. OpenAI, Inc., a charitable organisation, controls OpenAI Global LLC, the Delaware company in which Microsoft and other investors have invested billions of dollars.


In contrast to a traditional business, neither the CEO nor the investors have the authority to appoint or remove board members. Investors are cautioned in the company's charter that OpenAI's primary fiduciary obligation is to humanity and that the organisation's objective is "to ensure that artificial general intelligence (AGI) benefits all of humanity." Stated differently, this obligation supersedes any obligation to turn a profit.


Anthropic, with the express goal of "responsibly developing and maintaining advanced AI for the long-term benefit of humanity," is set up as a public benefit corporation (PBC). A significant modification to the basic PBC structure allows a common law trust with the same social objective as the business to elect a growing number of directors over time; these directors will come to constitute a majority after a predetermined period or once specific fundraising milestones are met.


Both models are atypical for innovative technology companies. Their goal is to limit the CEO's control and shield corporate governance from the demands of profit maximisation. Investors and executives can voice their disapproval if the company prioritises safety over profits, but they cannot force the board to act differently.


This strategy should be contrasted with the current surge of support for stakeholder governance. The Business Roundtable, a significant group of CEOs from top businesses, released a statement in 2019 in which many of its members committed to providing value to customers, employees, and society as a whole, not only to shareholders. The World Economic Forum, corporate governance specialists, and major asset managers have released similar stakeholder governance manifestos emphasising how important it is for businesses to take social objectives into account in addition to profit maximisation.


OpenAI's and Anthropic's governance systems, by contrast, imply that such pledges are not enough on their own. Regardless of one's opinion of the decision to fire Altman or of the efficacy of either company's governance structure, these governance experiments offer a valuable lesson: a company cannot rely solely on traditional corporate governance if it is serious about social purpose and stakeholder welfare. Instead, it must limit the power of executives and investors.


2. The commercial drive is too powerful for even the most inventive governing mechanisms to subdue.

The recent turmoil in OpenAI's leadership is evidence that even restraining the pursuit of profit will be difficult.


In a dramatic turn of events on November 17, 2023, the board of directors ousted co-founder and CEO Sam Altman, citing a lack of candour in his communications with the board. Despite objections from investors, the board stood firm in its decision. Shortly thereafter, Microsoft announced that it was hiring Altman and another co-founder, Greg Brockman, to continue their AI development work within Microsoft, and hundreds of OpenAI employees threatened to follow them. Within a week of his removal, however, Altman was reinstated as CEO of OpenAI, and all but one of the directors who had ousted him stepped down.


While Microsoft could not directly intervene in OpenAI's governance, it effectively acquired Altman and numerous employees, essentially "buying" OpenAI's expertise without compensating its shareholders. The situation resembles the "amoral drift" described by economists Oliver Hart and Luigi Zingales, in which the profit motive gradually overrides a firm's social mission even when the board originally acted for non-financial reasons.


It is doubtful that any governance measure could have prevented employees from moving to a for-profit entity like Microsoft, which raises questions about how effective such safeguards can be. Would Microsoft have invested such a significant amount in OpenAI if the company had remained focused solely on its social goals? Would investors be inclined to back AI startups with stronger commitments to social responsibility than to profit maximisation? These questions suggest that companies prioritising investor interests may win out over their socially oriented counterparts.


While solutions to prevent amoral drift are conceivable, none have proven foolproof thus far. As a result, the dominance of investor-friendly AI companies over socially-committed ones may persist unless robust safeguards are developed.


3. Independence and social duty do not always converge.

The so-called "orthogonality thesis," which holds that AI's intelligence and its ultimate objectives are not always connected, is a key idea in AI safety. Both super-intelligent and dumb machines that are harmful to mankind are possible. Being intelligent is not a guarantee against bad behaviour.


Experts in corporate governance ought to take note of this useful idea. Textbook corporate governance requires companies to appoint independent directors, who are assumed to be more faithful to shareholders and less susceptible to the sway of CEOs. However, independence from management and loyalty to shareholders are distinct qualities; the former does not always entail the latter. An independent director can ignore issues, act out of self-interest, or hold personal beliefs that are detrimental to shareholders' interests. It is not safe to assume that independent directors will behave well simply because they are independent.


By the same token, we cannot assume that socially beneficial actions will happen on their own once CEO and investor pressure is removed, as at OpenAI and Anthropic. Directors who cannot be removed by investors are less likely to heed investor preferences, but are they more likely to make the right decision for society as a whole?


Governance systems that prioritise social responsibility should not settle for mere autonomy from investors and CEOs. They must also put in place mechanisms that motivate directors to pursue the social aim. Corporate planners ought to test strategies that enable external review of board decisions, reward socially conscious decision-making, and offer inventive ways to hold board members accountable.


4. The goal of corporate governance should be to find a way to balance profit and safety.

The so-called "alignment problem" is a significant issue in AI safety because superintelligent AI may have values and objectives at odds with those of humans. Although this sounds like a science-fiction dream, AI experts agree that the alignment issue is real and that human-level AI will soon be achieved.


Even if we programme a superintelligent AI to pursue socially desirable terminal goals, we cannot rule out the possibility that it will pursue harmful instrumental goals along the way. The problem is that we do not yet know how to train an AI to behave in a way that is reliably consistent with human values. We can enumerate hundreds or even thousands of behaviours that are acceptable to humans, but the list will never be complete.


The central problem of corporate governance is strikingly similar to the AI alignment challenge. Investors want to ensure that corporate managers act in their interests when they entrust their money to a firm. Like AI programmers, investors can set some guidelines, but they cannot specify every rule that might apply in every scenario. Economists call the resulting agreement with managers an incomplete contract.


Corporate governance makes a valiant effort to address this issue. Companies give managers incentives, such as stock options, that align their goals with those of investors. They appoint independent directors. They disclose important information so that investors can monitor how the business is being run. They grant investors voting rights and other control mechanisms, enabling them to remove dishonest managers and intervene as needed.


The managerial "agency problem," or how to lower the danger that managers may stray from investor preferences, is the focus of the entire apparatus of corporate law and governance. While it doesn't completely resolve the issue, it does significantly lessen it.


The alternative governance frameworks developed by OpenAI and Anthropic aim to shield AI safety from the profit-driven mindsets of managers and investors. However, as we have seen, the desire for profit is a strong force that can override even well-thought-out governance structures.


An alternative path is to try to monetise AI safety. If private control of AI safety is to be achieved at all, harnessing the commercial motivation rather than fighting it offers the best chance of success.


Aligning profit and safety may be just as difficult as aligning AI with human values, but it offers the greatest potential benefits in the corporate governance of AI. This effort should be the subject of more imaginative experiments.


5. The boards of AI businesses have to strike a careful balance when it comes to cognitive distance.

AI safety is a specialised field. Even though many entrepreneurs are now aware of some of the problems associated with AI, the true experts are frequently outsiders with little or no background in the corporate sector.


More crucially, mainstream business people and AI safety experts frequently have quite different backgrounds, levels of expertise, and views on the potential dangers and pace of AI progress. What many AI safety experts regard as a small but real risk, such as an uncontrollable AI, strikes many outsiders as ridiculous sci-fi fantasy; what they regard as a highly probable and imminent development, such as human-level or superintelligent AI, strikes those same outsiders as wild speculation.


Scholars use the term "cognitive distance" to describe such differences in how people interpret and understand their environment, including the gap between AI safety experts and non-experts. Cognitive distance can be advantageous for group decision-making, particularly in creative businesses: decision-makers need exposure to fresh concepts and viewpoints to generate new ideas.


However, determining the ideal level of cognitive distance is challenging. Too much cognitive distance can hinder genuine cooperation and mutual understanding, while too little can lead to echo chambers and groupthink.


Was there insufficient cognitive distance behind the abrupt and dramatic decision to terminate Sam Altman, taken with little or no notice to significant investors and without public explanation? In addition to Altman and Brockman, OpenAI's board included a tech CEO, a RAND Corporation scientist specialising in AI governance, the company's chief scientist, and an academic AI safety specialist. Their opinions on AI safety were probably so similar that opposing viewpoints from outside the group had little influence on their decision-making. They probably felt no need to persuade anyone outside the company that dismissing the CEO was the right decision.


But if board members lack a strong safety orientation or instinctively dismiss worst-case scenarios, can AI businesses effectively pursue their social mission? The composition of OpenAI's interim board now more closely resembles the corporate mainstream: it includes big-tech veteran Bret Taylor and former Treasury Secretary and Harvard President Larry Summers, but no AI "geeks."


The new board may simply have swapped one set of shared beliefs for another, leaving the degree of cognitive distance unchanged. Stated differently, the previous board may have been unified in taking the risks of human-level AI seriously, while the new board may be equally unified in a more traditional commercial perspective. In both configurations, the cognitive distance in the boardroom may be too small.


Corporate boards are intricate social structures. In the ideal dynamic for decision-making, directors with varying backgrounds, expertise, and points of view debate passionately and intelligently in the boardroom, eager to offer their insights and, when necessary, to adjust their opinions. Real boardrooms frequently fall short of this ideal.


Given the enormous risks related to AI safety and the wide range of perspectives and levels of expertise involved, board composition should be a primary concern for AI firms. These organisations should aim for greater cognitive distance than more traditional companies, and they should actively encourage time commitment and lively, unbiased debate in the boardroom. The social and cognitive dynamics of the boardroom are vital, even though they are often overlooked in discussions of corporate governance. AI development is without doubt the industry where they should take precedence.

 

Conclusion

Corporate governance may effectively address many risks associated with AI, such as job displacement and privacy concerns, but it falls short in handling existential threats posed by AI, according to many AI experts.


A survey conducted in 2022 among AI experts revealed that nearly half of respondents believe there is at least a 10% chance of AI leading to catastrophic outcomes, such as human extinction. While corporate decision-makers may prioritise the common good, corporate governance mechanisms are ill-equipped to handle such extreme risks, primarily due to the inherent limitations of incomplete contracts.


Incomplete contracts, which lack rules for all possible future scenarios, are a common challenge in business transactions. When the costs of incompleteness are deemed too high, firms may opt to integrate their contractual counterparts into their own organisations to retain control over relevant assets and regulate unexpected situations.


In the context of AI safety, the ability to control the AI system—such as turning it off—is crucial for mitigating risks. However, if the AI becomes uncontrollable, as depicted in many sci-fi narratives, ordinary legal controls like property rights and contracts become ineffective. Instead, extraordinary legal controls akin to those used for regulating nuclear proliferation or biohazards are necessary.


While innovative corporate governance measures may assist in the short term, they cannot replace the essential role of public governance in addressing catastrophic AI risks. Therefore, governments must recognise their responsibility to ensure AI safety and take decisive action to safeguard humanity's future.


Our Directors' Institute - World Council of Directors can help you accelerate your board journey by training you on your roles and responsibilities, helping you make a significant contribution to the board and raise corporate governance standards within your organisation.



