Boardrooms continue to confront a seemingly endless stream of issues around digital systems risk governance, disclosure, regulation, and law. The rapid rise of artificial intelligence (AI), a digital technology that society is only beginning to understand and manage, exacerbates these challenges.
Businesses need to adopt AI, another digital tool and a more potent one than most, to stay competitive. Rapid technological advances have moved digital tools from specialised information technology (IT) tasks to serving as the central nervous systems that run the most critical systems and assets across all public and private economic sectors.
Applications of highly advanced AI increase the possibility of cyberattacks by outside threat actors. They also introduce novel, intricate hazards with potentially greater consequences: the introduction of bias, inadvertent breaches of rules and laws, data espionage, and poor decision-making, to name a few. The growing complexity and persistence of cyber risk and AI are daunting, and boards find themselves playing defence as they respond to growing calls for more control of digital systems.
Furthermore, the technical intricacy of risks linked to digital tools is widening the governance gap between risk managers and the board. Digital risk is different from regular business risk. Risk functions frequently rely on defensive tactics such as expanded disclosures, risk assessments, and compliance; these are crucial, but they do not by themselves amount to acceptable governance. Boards should insist that findings from these procedures be conveyed in a commercial context rather than in technical jargon.
Despite this shortcoming, board members frequently take false comfort in the belief that these actions satisfy their governing duties. Boards must instead build a stronger framework for digital risk. This requires understanding the systems that need to be governed and creating frameworks, rules, and processes for managing digital risks, which can only be accomplished through organisational, educational, and cultural change within the company.

An Evolving Risk Landscape: Defining a Response
Until a few years ago, much of the discussion around AI risk was limited to issues with the development and operation of AI models. This was primarily driven by the field of AI ethics, which takes a principle-based approach and addresses concerns like fairness, bias, explainability, robustness, and human centricity in the context of how AI systems make decisions or produce outputs.
However, shifts in the wider AI ecosystem, along with AI being used in more business processes, AI supply chains becoming more complicated, and evolving regulatory responses, mean that potential risks must be viewed through a broader lens, with a more comprehensive approach to the data and privacy issues arising from AI use that goes beyond model assessments.
India's Leading Position in Generative AI Adoption
According to "The Elastic Generative AI Report: One Year On," India is leading the way in worldwide adoption of Generative AI (GenAI) technology, with an impressive 81% of organisations already deploying these disruptive solutions. Despite this excitement, difficulties such as data management, security concerns, and accessibility remain, highlighting the importance of strategic relationships with GenAI providers. Indian firms expect considerable productivity improvements from GenAI-enabled conversational data search capabilities, perhaps saving two or more days per week per employee. Furthermore, the analysis indicates a positive trend of higher budget allocations to GenAI programmes in the coming years, demonstrating India's willingness to invest in and utilise GenAI's potential for innovation and competitive advantage in the global market.
To get a fuller picture of the AI risks they face and figure out how to govern them, businesses should think about three main areas: their overall AI strategy and how they plan to use AI; the rules and regulations that affect their activities and how they use AI; and the expectations for responsible AI use from both internal and external stakeholders.
1. Company AI Approaches and Strategy
Identifying risks, remediation priorities, and governance focus areas requires an understanding of the overall AI strategy and the deployment approach.
One of the first considerations when adopting AI will be the mix of "buy" and "build" approaches.
Businesses using a "buy" strategy (acquiring pre-existing AI services or models) will prioritise managing risks from third parties and those that may arise during deployment of the AI technology. Considerations include the provider's controls and compliance in the AI system's development, the nature of supplier relationships and how data flows between them, the rights and obligations that come with deploying models and processing data, the governance of AI system deployment, and the resilience of the business given its reliance on outside suppliers.
Under the "build" strategy, in which businesses create their own AI models and systems, organisations must assume primary accountability for the system's development. Additional considerations here include compliant collection and use of training data, and testing during development cycles for properties such as robustness, security, privacy, and non-discrimination. Questions will also arise about addressing third-party rights in data used for models, including individual privacy requests, and about managing risks associated with downstream use of the models.
The infrastructure that will support AI systems is also a factor in the buy-versus-build decision: what data sources or assets will be connected to AI systems, how and where they will be hosted, what the interfaces for interaction will be, what hardware will be used, and so on.
The approach to data use is another aspect of the overall risk-relevant AI strategy. This can be broken down by the data relevant at each stage of the AI system lifecycle: data sources for inputs (the data that triggers an AI system to produce a result), data arising from outputs (the results an AI system produces), and data sources for training (the data used to develop and train the AI system's model).
It is also worth considering the broad categories of data that will or will not be available for processing by AI systems at each stage of the lifecycle. These include master data on important business products and structures; transactional data on purchases, sales, logistics, and resourcing; unstructured data in non-defined formats throughout an organisation; and third-party data collected and managed by providers who have no direct relationship with the end user.
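To make such scoping decisions concrete and auditable, some organisations record each one in a lightweight inventory. The following is a minimal, hypothetical sketch in Python; the stage and category names simply mirror the groupings above, and DataScopeDecision is an invented structure rather than any standard schema.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    TRAINING = "training"  # data used to develop and train the model
    INPUT = "input"        # data that triggers the system to produce a result
    OUTPUT = "output"      # data arising from the system's results

class DataCategory(Enum):
    MASTER = "master"                # key business products and structures
    TRANSACTIONAL = "transactional"  # purchases, sales, logistics, resourcing
    UNSTRUCTURED = "unstructured"    # non-defined formats across the organisation
    THIRD_PARTY = "third_party"      # collected and managed by external providers

@dataclass
class DataScopeDecision:
    """Records whether a category of data may be processed at a lifecycle stage."""
    stage: LifecycleStage
    category: DataCategory
    permitted: bool
    rationale: str

# Example entry: exclude unstructured data from training until provenance is verified.
decision = DataScopeDecision(
    stage=LifecycleStage.TRAINING,
    category=DataCategory.UNSTRUCTURED,
    permitted=False,
    rationale="Provenance and third-party rights not yet verified.",
)
print(decision)
```

Even a simple register like this gives risk and commercial teams a shared vocabulary for discussing which data may feed which stage of an AI system.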
These are undoubtedly high-level viewpoints, and the adoption of AI in practice involves a variety of subtleties, approaches, and additional complications, such as the use of open-source models. However, developing a comprehensive grasp of the organisation's intended strategy, and a shared perspective on advantages and risks at a strategic level, can establish common ground and a common goal between risk management and commercial teams.
Examining the overall AI strategy can also position AI governance programmes for success by showing how they support organisational objectives and address important risk areas in terms that those responsible for these strategies understand, aligning interests. Teams in charge of AI governance can then demonstrate how their solutions clearly benefit the organisation as a whole, for example by increasing data accuracy for improved decision-making, helping accelerate adoption through practical guardrails, and simplifying contracts with clients and partners.
2. Advances in Regulation and Law
The second major area to consider is the compliance risks and duties (both hard and soft) that arise from laws, regulations, and standards. Priorities and focus areas can be identified methodically by weighing the company's AI strategies and deployment activities discussed above, the current application of regulations to company activities, and horizon scanning of new and emerging regulation.
Company executives will need to think about these issues from a broad standpoint as well as in light of the regions, industries, and sectors in which they operate. The following outlines some recent developments in AI-specific law, data regulation, and related legal matters.
a. Existing Data Rules.
Existing privacy and data protection rules are likely to apply whenever personal data is involved or an AI model is used to inform decisions about individuals. However, while many privacy principles are well established and data protection authorities have published guidance on data protection and AI use, there remain numerous areas of ambiguity that companies must navigate and continue to monitor.
For example, in January 2024, the UK Information Commissioner's Office (ICO) opened a consultation on how parts of data protection legislation should apply to the creation and use of generative AI models. The consultation raises questions about legitimate interests as a basis for using data to train generative AI, and about how the data accuracy principle, data subject rights, and the purpose limitation principle apply when developing and using generative AI.
b. Newly Emerging Lawsuits
It is hardly surprising that, while regulators scramble to establish an adequate legal framework, we are already witnessing an increase in litigation about AI. Leading AI development businesses have been hit with class action lawsuits in the US for allegedly violating data privacy rules by using personal data for model training, and have faced several intellectual property challenges over rights in the data used to train AI systems. Even though some lawsuits over the collection of personal data have been dismissed, disputes about how privacy regulation, intellectual property, and the creation and training of AI models interact are likely to persist.
c. Laws Particular to AI
India currently lacks a legislative framework governing the development and application of AI and machine learning (ML) tools and technologies. The field is expected to be governed by the forthcoming Digital India Act, which might be released for public input by July 2024. The legislation is anticipated to accelerate AI research by safeguarding innovation in AI, ML, and other new technologies. The Indian government has said that, while it will promote the monetisation of AI/ML technology in India, this process must be governed by specific compliances for high-risk use cases, such as human intervention and oversight, as well as the ethical use of AI/ML tools and technologies.
Meanwhile, the Ministry of Electronics and Information Technology ("MeitY") has issued advisories to 'intermediaries' and 'platforms' that develop and make AI tools and technologies available to Indian users, asking them to comply with additional AI-specific requirements as part of the due diligence obligations imposed on such 'intermediaries' under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. While these advisories lack a formal legal basis, the business sector appears to be collaborating with the government to resolve its concerns to the extent possible.
3. Stakeholder Expectations and Company Values.
A third key aspect of designing AI governance responses is for organisations to address the expectations of relevant stakeholders as well as align with company values and ethical positions. Understanding the expectations of consumers, shareholders, business partners, counterparties, workers, and other key stakeholders may help define the emphasis areas and goals for AI governance initiatives. Different stakeholder groups will have varying degrees of relevance and emphasis for different firms in different locations and industries.
This may influence the kind and severity of issues and hazards addressed, as well as how governance and accountability are demonstrated. For example, consumer-facing organisations may emphasise explainability at a lower technical level than business-to-business organisations, and may focus more on external communication of their AI governance efforts; business-to-business organisations may instead want collateral in place to respond to customer risk assessments, or to demonstrate adherence to industry standards to meet procurement requirements.
Similarly, AI governance responses should take into account the company's values and intended position on responsible AI use. ESG programmes, sustainability reporting, internal or public commitments to values, codes of conduct and other internal policies that outline the guiding principles that govern how an organisation conducts itself can all contribute to a more comprehensive framework for defining and driving adherence to company AI governance stances.
Given the rapid pace of development, regulatory uncertainty, and the combination of emerging risks and AI's ability to amplify effects through speed and scale, being able to tie controls and standards to broader concepts of risk and governance objectives can help anticipate and avoid future issues that would otherwise go unnoticed.
Developing a Response
Privacy and overall data risk will be critical considerations for firms pursuing AI strategies and systems. Adopting a programmatic approach and mapping how the organisation's main AI approaches affect data use and privacy can help produce a practical, useful, and durable response that fits the company's planned AI activities, risk exposure, and risk appetite.
To accomplish this, privacy and data risk domain experts must be aware of, and contribute to, wider organisational AI strategies, ranging from model development, deployment, and infrastructure to training and skill development, and must take a strategic approach to AI governance.
Defining a target state for AI governance, as well as a desired level of maturity over time, will support this strategy. Identifying the concepts and patterns that underpin regulatory changes, and using these to drive overall actions, may also help in predicting future needs and 'baking in' responsible AI culture and controls to ensure continuing compliance and simpler AI integration.
While keeping a long-term perspective, companies should pursue quick wins that can rapidly improve their risk posture in areas such as:
1. Setting up gateways and checkpoints in current processes to trigger reviews and advisory support for AI systems (a minimal intake sketch follows this list).
2. Putting in place high-level rules, guidelines, and decision frameworks for the use of AI in the company.
3. Adding AI-specific components to current assessment procedures, such as vendor risk evaluations and privacy impact assessments.
4. Recruiting volunteers to help with risk management. Given the current enthusiasm for AI adoption, now may be a good moment to harness that interest by appointing 'AI governance champions' within the organisation to nurture knowledge, responsibility, and ownership.
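To illustrate the first and third of these quick wins, a gateway can be as simple as a short intake questionnaire whose answers determine which reviews a proposed AI use case must pass before launch. The following is a minimal sketch, assuming a Python-based workflow; the questions, keys, and trigger logic are hypothetical and would need to reflect the organisation's actual assessment procedures.

```python
# Hypothetical intake checkpoint: questions and trigger logic are illustrative only.
INTAKE_QUESTIONS = {
    "processes_personal_data": "Does the use case process personal data?",
    "automated_decisions": "Does it inform or make decisions about individuals?",
    "third_party_model": "Does it rely on a third-party model or API?",
    "new_data_source": "Does it introduce a data source not previously approved?",
}

def reviews_required(answers: dict[str, bool]) -> list[str]:
    """Map intake answers to the reviews a use case must pass before launch."""
    reviews = []
    if answers.get("processes_personal_data") or answers.get("automated_decisions"):
        reviews.append("privacy impact assessment")
    if answers.get("third_party_model"):
        reviews.append("vendor risk evaluation")
    if answers.get("new_data_source"):
        reviews.append("data governance sign-off")
    return reviews or ["fast-track approval"]

# Example: a proposed chatbot built on an external model that handles personal data.
print(reviews_required({"processes_personal_data": True, "third_party_model": True}))
# ['privacy impact assessment', 'vendor risk evaluation']
```

The value of such a checkpoint lies less in the code than in forcing every AI initiative through a consistent, recorded triage step.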
Organisations should also analyse their counterparts' activities and strive to move with the market. As AI governance techniques evolve, consensus appears to be growing on the following corporate approaches and considerations:
1. Conducting risk evaluations at the use-case level rather than evaluating the technology as a whole.
2. Creating an inventory of AI use within the company to better understand activities and risk exposure (a minimal register sketch follows this list).
3. Implementing an overall policy framework and a multi-functional governance structure that brings in the diverse subject-matter skills required to manage the many risks associated with AI deployment.
4. Looking to use and adapt existing frameworks. For example, there appears to be a growing consensus on using the National Institute of Standards and Technology's (NIST) AI Risk Management Framework as a starting point for AI governance projects, notably in the United States.
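As an illustration of the first two points, a use-case inventory can start as a very simple register. The sketch below is hypothetical: the fields, risk tiers, and example entries are invented for illustration, and a real register would align with whichever framework the organisation adopts (for example, the NIST AI RMF).

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a company-wide register of AI use."""
    name: str
    owner: str                         # accountable business function
    sourcing: str                      # "buy" (third party) or "build" (in house)
    processes_personal_data: bool
    risk_tier: str                     # e.g. "high", "limited", "minimal"
    reviews_completed: list[str] = field(default_factory=list)

register = [
    AIUseCase("CV screening assistant", "HR", "buy", True, "high"),
    AIUseCase("Warehouse demand forecast", "Operations", "build", False, "minimal",
              reviews_completed=["privacy impact assessment"]),
]

# Surface high-risk use cases that have not yet completed any review.
for use_case in register:
    if use_case.risk_tier == "high" and not use_case.reviews_completed:
        print(f"Review outstanding: {use_case.name} (owner: {use_case.owner})")
```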
Organisations frequently struggle to translate AI governance concepts and compliance duties into technical requirements. Making this a collaborative process and giving clarity on the intended objectives to technical teams, who are best positioned to suggest relevant metrics, can help prevent this problem.
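As a concrete example of that translation, a governance objective such as "AI-assisted decisions should not systematically disadvantage any group" can be expressed as a measurable quantity that technical teams monitor. The sketch below uses the demographic parity gap, one of several possible fairness metrics rather than a mandated standard, and the data is a toy example.

```python
import numpy as np

def selection_rates(preds: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive decisions per group."""
    return {str(g): float(preds[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-decision rates between any two groups."""
    rates = selection_rates(preds, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: binary approval decisions for applicants from two groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(selection_rates(preds, groups))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups))  # 0.5
```

Once a metric like this is agreed, the governance question (is the gap acceptable, and at what threshold must the business intervene?) stays with the business, while measurement becomes a routine engineering task.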
Conclusion
In conclusion, as AI continues to evolve and permeate industries, the imperative for robust governance frameworks becomes increasingly evident. Organisations must navigate a complex landscape of digital risks, compliance obligations, and stakeholder expectations. By taking a strategic, proactive approach to AI governance, one that includes thorough risk assessments, regulatory compliance, and alignment with company values, businesses can reduce risks and harness the transformative power of AI technologies. Continuous adaptation and collaboration across diverse stakeholders will be key to ensuring sustainable and ethical AI deployment in the years ahead.
Our Directors' Institute (World Council of Directors) can help you accelerate your board journey by training you to carry out your roles and responsibilities efficiently, helping you make a significant contribution to the board and raise corporate governance standards within the organisation.