
Demystifying AI Governance


Introduction

Artificial Intelligence (AI) refers to machines that simulate human intelligence, such as the ability to reason, discover meaning, generalize, or learn from experience. With technological advances in computer speed, memory capacity, and data availability, AI has been widely adopted across industries and fields around the globe. These numerous applications of AI have transformed business processes and solved complex problems. The fourth industrial revolution is a wave powered by AI-driven products and services that surround our daily lives. As Sundar Pichai, CEO of Google, said, “The last ten years have been about building a world that is mobile-first. In the next ten years, we will shift to a world that is AI-first.” This quote underscores that AI is the mantra of the current era, and big organizations highlight their enthusiasm to apply AI to their core business processes.

The core feature of AI-driven products and services is to act on the rationale that yields the best outcome for a specific goal. The breadth of AI coverage and utilization raises the question of how it affects societal structures. The benefits cannot be overemphasized, as these technologies have improved our everyday lives and organizational outputs. On the other hand, AI applications can be biased, leading to discrimination that undermines equal opportunity and amplifies oppression. The underlying prejudice is often traceable to partial data used in training the model. A striking example of these social consequences was reported by Reuters: Amazon’s recruiting engine was gender-biased in rating prospective candidates from submitted CVs. In this case, the bias reflected the tech world’s male-dominated working environment, and the engine penalized female applicants. AI models tend to perpetuate existing biases when learning from historical data. Thus, there is a need for underlying legal bases and policy frameworks to implement fair AI solutions. Another notable case is the Cambridge Analytica data breach; according to The Guardian, the company harvested millions of American Facebook users’ data without their permission and deduced sensitive information from it. Personality profiles created from freely accessible Facebook data were used to build a psychological tool aimed at political persuasion in the 2016 US election. Failures like these motivate the concept of AI governance.

What is AI governance?

According to the Merriam-Webster dictionary, “governance” is the act or process of overseeing the control and direction of something. Governance is how rules and actions are regulated, maintained, and structured, together with measures of accountability. AI governance is about evaluating and monitoring the research, development, and deployment of AI models while ensuring they are explainable, transparent, and ethical. AI governance is a subfield of government by algorithm, alongside blockchain. Its scope extends beyond public policy, as it is fully applicable to business operations, and these two contexts tend to define “explainable,” “transparent,” and “ethical” differently. Owing to the vast implementation of AI across sectors such as healthcare, business, banking, finance, education, and manufacturing, the need to outline AI governance is pertinent. This need drives the development of legal frameworks that answer questions about AI safety, appropriate applications of AI automation, data privacy, handling of vulnerabilities, compliance, auditability, bias and risk monitoring, and the evaluation of discrimination.

Importance of AI Governance

The rush to bring AI-powered products and services to market has often pushed AI governance considerations aside. However, there is a growing realization that failing to address AI governance adequately can have far-reaching consequences. The vast application of AI-driven solutions accentuates both benefits and risks, highlighting the issue of trust; this existential threat of distrust in AI solutions rests on integrity, explainability, fairness, and resilience. AI governance sets the framework for conducting operations in a way that fosters trust among key stakeholders, including customers. Both businesses and governments realize that successful and sustainable AI regulation depends on partnership and collaboration, which leads them to proactively plan and operationalize trust, accountability, and transparency in their systems rather than wait for enforced requirements. In short, AI governance needs a hands-on approach to handling operations and resources.

Who is responsible for AI Governance?

As stated earlier, the AI governance conversation centers on public policy (government-oriented) and business operations. Assigning responsibility for AI governance is crucial, as it enables effective accountability. Discussions about AI governance in societal contexts have been conducted at various levels: the UN, UNESCO, the EU, and several developed countries have proposed AI strategies and government documents, and top companies are actively researching and developing responsible AI products, services, and good governance. Executing these policies and strategies requires clear accountability and metrics to measure progress. Various governments are becoming involved in AI governance by developing public policies, AI priorities, and legal requirements, while audit firms act as third-party entities engaged in the continuous review of AI governance. Not all businesses need to act on all fronts when developing AI governance strategies, as smaller companies may not be able to influence prominent vendors or regulatory groups. However, companies of all sizes have started planning and implementing AI governance as they expand AI projects from pilots to production, focusing on data quality, algorithmic performance, compliance, and ethics.

Measuring AI Governance

Measuring AI governance is vital to proper management, in addition to being able to assign responsibilities. As Peter Drucker said, “You can’t manage what you don’t measure,” and “What gets measured gets improved.” These metrics are tailored to ensure robust oversight of an organization’s use of AI. There is no single recommended measure of AI governance, as various countries and organizations have adopted different metrics. The metrics applied across levels and industries center on the following:

  • Data: This covers data quality assessment and the lifecycle of data (data lineage); a minimal data-quality sketch follows this list.
  • Security: This covers model security and its usage.
  • Cost/Value: This covers the value and cost of the project and its respective sub-processes.
  • Bias: Measurement and mitigation strategies for bias are required. 
  • Accountability: Coherent outline of responsibilities.
  • Audit: Provide an environment in which audits can be performed continuously, and workflows and systems reviewed periodically, by an in-house department or third-party entities.
  • Time: The field of AI changes quickly due to innovation and research, so the impact of these strategies and systems must be reassessed over time.
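
To make the data item concrete, here is a minimal sketch of a data-quality report, assuming the training data lives in a pandas DataFrame; the column names and the choice of checks are illustrative assumptions, not a standard.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Compute simple data-quality metrics that a governance
    dashboard could track over time (illustrative, not exhaustive)."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column.
        "missing_ratio": df.isna().mean().round(3).to_dict(),
        # Constant columns carry no signal and may hint at pipeline bugs.
        "constant_columns": [c for c in df.columns if df[c].nunique() <= 1],
    }

# Illustrative usage with hypothetical applicant data.
df = pd.DataFrame({
    "age": [34, 29, None, 41],
    "years_experience": [10, 4, 7, 15],
    "source": ["referral", "referral", "referral", "referral"],
})
print(data_quality_report(df))
```

Tracking a report like this per data version gives the audit and time metrics above something concrete to review.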

How does AI ethics relate to AI governance?

As mentioned above, AI governance is about evaluating and monitoring AI models’ research and development while ensuring they are explainable, transparent, and ethical. So what is AI ethics?

AI ethics is a set of moral principles and techniques that inform the development and responsible use of AI. AI solutions increasingly shape our society and everyday lives, significantly affecting how we interact with each other. AI ethics guidelines formally define the design, outcomes, uses, and roles of AI-driven products and services in terms of ethical and sustainable decisions. Discernment between right and wrong is crucial, as these AI-driven solutions are autonomous, rational, and intelligent. AI models make decisions that impact humanity, which can have negative repercussions if not ethically reviewed and analyzed.

For AI adoption to succeed, various AI frameworks that aim at standardization and regulation need to be established. Ethical AI policies are required to address ethical, social, and legal exposures, and companies may incorporate them into existing and new workflows. AI governance strategies require policies around the ethical use of AI and data: they specify protected attributes, accountability, mitigation plans for model bias and discrimination, privacy and security programs, and legitimate coverage of the entire lifecycle of the AI product or service.


How does MLOps relate to AI governance?

Operationalizing AI means managing the complete end-to-end lifecycle of an AI project. ML Operations (MLOps) focuses on creating an automated pipeline for bringing machine learning models into production. Governance is the foundation of MLOps, as management is vital throughout the lifecycle of an AI project. Governance sets benchmarks and regulations for AI projects to ensure they deliver on their responsibilities to stakeholders, employees, the public, and the government. These responsibilities include commercial, legal, and ethical obligations, which form the basis of fairness. AI governance in MLOps ensures consistency and mitigates risk by placing rules and controls on the machine learning models running in production. These rules and controls create a coherent management system that protects shareholders’ investments and increases efficiency by ensuring a suitable return on investment (ROI).

AI governance sets the tone for working in an effective, profitable, and sustainable environment, as AIOps/MLOps is checked against commercial, legal, ethical, and compliance requirements. These requirements include access control, traceability of model results, logs, testing and validation, data provenance, transparency, bias, performance management, and reproducibility. AI governance is translated into actions, processes, and metrics that guide the explainable, transparent, and ethical use of AI. In MLOps, these governance activities, processes, and metrics fall into two strategies:

  1. Data governance: These are procedures to manage, utilize, and protect data, backed by roles, policies, standards, and metrics that ensure operations align with desired objectives. This framework for the appropriate use and management of data aids a better understanding of data lineage and the mitigation of bias.
  2. Process governance: This focuses on an adequate process definition that ensures all governance considerations in the lifecycle of machine learning projects are addressed and accurately documented. It also involves risk assessment of machine learning projects and matching the governance process to that risk level.

MLOps bolsters the transparency and explainability of machine learning operations via visualization and understanding of performance metrics. Monitoring these metrics across the lifecycle of machine learning projects enables lucid ethical tradeoffs, bias detection, and the diagnosis of mishaps.
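
As a concrete illustration of traceability and reproducibility, here is a minimal sketch using the open-source MLflow tracking API; the experiment name, parameter values, metrics, and governance tags are illustrative assumptions, not a prescribed scheme.

```python
import mlflow

# Group runs under a governance-reviewed experiment (name is illustrative).
mlflow.set_experiment("credit-scoring-v2")

with mlflow.start_run():
    # Record what was trained, so results are traceable and reproducible.
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("training_data_version", "2021-09-01")  # data provenance
    mlflow.log_metric("validation_auc", 0.87)
    mlflow.log_metric("disparate_impact_ratio", 0.91)  # fairness metric
    # Tags make audits searchable: who reviewed the model, and its status.
    mlflow.set_tag("governance.reviewer", "model-risk-team")
    mlflow.set_tag("governance.approved", "true")
```

Logging parameters, metrics, and review tags per run means an auditor can later answer “which data and settings produced this model, and who approved it?” without reconstructing the pipeline.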

AI adoption concerns hinge on the issue of trust. Strong AI governance drives AI projects to be explainable, transparent, and ethical, giving customers, employees, and regulatory bodies assurance that appropriate measures are in place to safeguard fairness and integrity. A robust MLOps workflow and a comprehensive AI governance framework set a solid foundation for accelerating adoption.

AI and Governance Frameworks

An AI and governance framework ensures that AI technologies have legal and standard practices researched and developed to navigate trusted and fair AI adoption. Defining a governance framework for AI depends on understanding what AI and AI governance mean in their various contexts. YANG Qingfeng, a professor at the Center for Applied Ethics and the Fudan Development Institute of Fudan University, described three modes of governance based on the definition of AI. These three modes are:

  • governance based on governmental bodies,
  • governance based on humanistic values, and
  • governance based on technologies.


The first mode of AI governance focuses on governmental bodies and considers AI a tool used by various actors: governments, companies, individuals, and so on. Safety and reliability are the keys to good, responsible, and rational use.


The second mode is based on humanistic values, as AI is seen as an embodiment of human values. AI needs to follow values such as responsibility, safety, fairness, and trust. This perspective focuses on designing processes and planning strategies that guard or embed human values in the intelligent agent, and it emphasizes ethical frameworks and ethical policymakers and decision-makers.


The third mode focuses on the technologies or technological systems themselves. This perspective covers philosophical, technical, and societal problems, and in this view, AI governance aims to tackle them.


These modes of governance rest on two high-level guiding principles that promote trust in AI and an understanding of the use of AI technologies:

  • Explainable, transparent, and fair decision-making: Perfect explainability, transparency, and fairness are unfeasible. It is nonetheless pertinent to ensure that AI applications, at every level and in every industry, reflect the objectives of these principles as far as possible, as doing so helps build trust and confidence in AI.
  • Building human-centric solutions: Since AI amplifies human capabilities, safeguarding human interests and values, including well-being and safety, should be the primary consideration throughout the lifecycle of AI projects.

Logically, AI governance has transitioned from a ‘use context’ to a ‘prediction context.’ Most research has focused on the entities that use and design AI, for which rational or responsible use is the inevitable path. However, AI has substantial autonomy and the ability to learn, and algorithms are now used to predict future human behavior. The focus is therefore shifting to the relationship between AI, human beings, and the social context.

Building AI governance frameworks need not start afresh: sectoral regulation, legal codes, and established judicial processes for resolving disputes can be adapted to AI. Existing rules on product liability and negligence can be extended to cover AI systems. Existing governance structures are usually sufficient starting points for addressing AI governance issues; in the rare cases where they are not, governments should drive collaboration with other entities to identify emerging risks and plan mitigation strategies in consultation with civil society.

Key areas for clarification

AI can transform industrial and societal processes, and the previous section covered governance frameworks extensively. Government is a key stakeholder: it provides context-specific and comprehensive guidance for the legal and ethical development of AI and drives multi-stakeholder collaborative approaches to ensure good outcomes. AI priorities and the socio-economic makeup of various regions yield variations in this guidance, but high-level global standards serving as “due diligence” best-practice processes, within which AI is developed and deployed, are essential. Some of the key areas that require additional oversight and clarification are:

Explainability standards: Explainability concerns how AI algorithms function and how decision-making occurs in model predictions. It boosts AI adoption by improving people’s understanding of, confidence in, and trust in the accuracy and appropriateness of model predictions, and it supports accountability through assessment of algorithms, data, and design processes. Meaningful explainability varies by audience (because of differences in how complexity is apprehended) and raises the question of what level of explanation is acceptable. A detailed technical explanation may not be enlightening or helpful in practice: a description of how an AI system behaves is more likely to be decipherable and appreciated than a description of the underlying mathematical equations of the algorithm’s logic. A general audience may care more that the system’s inputs and outputs are fair and reasonable than about a deep understanding of the mathematics. Nevertheless, in some cases, such as fraud detection and information security, it is not reasonable to disclose how algorithms work, as this might create adverse exposure. A reasonable compromise is required that balances the benefits of complex AI systems against the practical constraints that different standards of explainability would impose; appropriate standards of explanation should not exceed what is reasonably necessary and warranted.
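
One common, model-agnostic way to provide a first level of explanation is to report which inputs drive a model’s predictions. Here is a minimal sketch using scikit-learn’s permutation importance; the synthetic data and the choice of model are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real, governed dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

A ranked list like this is often a reasonable audience-appropriate explanation: it says what the model depends on without exposing the underlying mathematics.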


Fairness appraisal: The fairness of AI algorithms concerns the need for decisions to be free from unfair stereotypes, bias, and discrimination. Fairness has varying, context-specific definitions, whether decisions are made by humans or machines. Fairness appraisal means setting a clear fairness approach upfront to assist decision-making when building AI solutions. The choice of approach is context-specific and requires ethical reasoning, and different choices yield differently equitable models. Policymakers need to set rules that influence the extent to which fairness is achieved and appraised; such rules improve the consistency of decision-making and the fairness of decisions.
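
As one illustration of operationalizing a fairness definition, the sketch below computes a disparate impact ratio (the favorable-outcome rate of one group divided by that of another); the data, the group labels, and the 0.8 threshold, borrowed from the common “four-fifths rule” of thumb, are illustrative assumptions.

```python
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Favorable-outcome rate of group "B" divided by that of group "A".
    Values near 1.0 indicate parity; below ~0.8 often triggers review."""
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return rate_b / rate_a

# Hypothetical model decisions (1 = favorable) and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = disparate_impact(y_pred, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("potential adverse impact: review the model before deployment")
```

The point is less the specific metric than that the fairness approach is chosen upfront and checked automatically, rather than debated after deployment.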


Safety considerations: This is about fostering good safety practices and providing assurance that systems are sufficiently reliable and secure. Safety considerations take precautions against both accidental and deliberate misuse of AI, address safety risks across technical, legal, economic, and cultural dimensions, and improve confidence in AI’s application. They also aid the documentation of workflows and standards for safety inspections, serving as due diligence for specific application contexts, and should take account of psychological factors and the risk of automation bias. Their scope could be extended to safety certificates established through collaboration between government and industry, with requirements (safety checks) that must be met for specific use cases. To make safety considerations sustainable, it is essential to continuously evaluate the tradeoff between the benefits of openness and the risk of abuse for each specific advance.


Human-AI collaboration: Humans are an integral component of the decision-making process, which suggests active human oversight of and involvement in the automated system. This involvement can be classified into three approaches:

  • Human-in-the-loop (the human retains complete control, and the AI only provides recommendations)
  • Human-out-of-the-loop (the AI system has complete control, without the option of human override)
  • Human-over-the-loop (a human monitors or supervises, with the ability to take over control when the AI model encounters unexpected or undesirable events)

Holistically, people are central to an AI system’s lifecycle. Governance in this context may involve identifying boundaries where human participation is deemed imperative (such as life-altering decisions and legal judgments), providing guidance on when people are approved to switch off AI systems in production, considering backup options, and including discourse with stakeholders. Keeping a “human in the loop” is often believed to provide a fail-safe against mistakes, yet it isn’t scalable to have humans check every recommendation and operation of an AI system. Both AI and humans make mistakes; governance should therefore define a threshold of confidence for the various instances.
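
A minimal sketch of such a confidence threshold, in the human-over-the-loop style, follows; the threshold value, the model interface, and the example outputs are illustrative assumptions.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # illustrative; set per governance policy

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def route(label: str, confidence: float) -> Decision:
    """Auto-approve confident predictions; escalate uncertain ones
    to a human reviewer (human-over-the-loop)."""
    return Decision(label, confidence, confidence < CONFIDENCE_THRESHOLD)

# Hypothetical model outputs.
for label, conf in [("approve", 0.97), ("deny", 0.72)]:
    d = route(label, conf)
    print(d.label, "-> human review" if d.needs_human_review else "-> auto")
```

Routing only the uncertain cases to people keeps human oversight meaningful without requiring review of every decision.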


Liability frameworks: The question of who is responsible for AI governance was addressed earlier. Governments and organizations should remain responsible for decision-making and the implementation of AI governance strategies, as it is inappropriate to shift legal responsibility to intelligent machines. No matter how complex the AI system, persons or organizations must be ultimately responsible for the actions of the AI systems within their design or control. Liability frameworks outline how blame for problems is assigned reliably, and they provide adequate protection against legal, intellectual property, and societal exposures. A cautious approach to designing liability frameworks is needed, as a wrong framework might place unfair blame, stifle innovation, or even reduce safety. Changes to the general liability framework should be made only after comprehensive research establishes the failure of existing contract, tort, and other laws.

Best practices for AI governance

Some best practices of AI governance are:

  • When designing responsible AI, governance processes need to be systematic and repeatable.
  • Build a diverse AI governance culture that involves cross-functional teams and people across genders, races, and underrepresented groups in creating responsible AI standards.
  • Make constant efforts towards transparency so that any decisions made by AI are explainable.
  • Ensure the measurability of the project to achieve visibility, explainability, and auditability of the technical process and the ethical framework.
  • Implement AIOps/MLOps best practices.
  • Use responsible AI tools to inspect and test AI models and support predictive maintenance.
  • Stay conscious of and learn from the responsible AI process, from fairness practices to technical references and technical ethics.
  • Review and define responsibilities at the project initiation phase.



The Future of AI Governance

Despite the growth of ethical frameworks, AI systems continue to be implemented rapidly, in spheres of considerable significance, by both the public and private sectors. The future of AI governance is still uncertain. Many challenges remain, and no single initiative, country, or company can tackle them alone. Emerging technologies are increasingly cross-border, and significant opportunities could be lost without some alignment of the regulations and norms that guide technological development and implementation across jurisdictions.

Overall, the road to the digital future is full of conflicts, but this does not mean that all technology governance must be global. Regions, states, and cities need to respond to their citizens’ specific social, economic, and cultural demands.


Defining comparable global levels for ethical, humanitarian, legal, and politically normative frameworks will prove decisive in managing the digital transition and the search for social inclusion. Moreover, there will be a growing need to move beyond ethical principles and focus on the standards needed for algorithms, taking into consideration the geopolitical and cultural differences that arise.

Forums for debate and dialogue, both executive and parliamentary, are the right platforms to discuss the future of digital governance and respond to one of the biggest threats and challenges our world faces today. There is not yet one right answer about the best roadmap for AI, but several options; we need to work together to define which road will benefit the many.

Conclusion

We’ve explored the various aspects of AI governance, its best practices, and its future outlook. I hope you learned something new from this article. Thanks for reading!
