
An In-Depth Guide To Help You Start Auditing Your AI Models

This article is a comprehensive guide to AI auditing: what it is, why it matters, how the audit process works, and its benefits and challenges.

By Mugdha Somani

Wondering why an artificial intelligence audit matters? If you google “artificial intelligence audit,” the results cluster around two themes – audit of AI and audit with AI. So which of the two is this article about? The answer is the first: an audit of AI, or simply an AI audit. Read on if you are curious about why AI audits matter and how to conduct them. If you are searching for “audit with AI,” this article is not for you!

So without any confusion, let’s jump straight into AI auditing.

As AI becomes mainstream across sectors such as healthcare, finance, banking, and law enforcement, its ethical consequences, legality, and safety become significant concerns.

Mature AI systems need concrete audit procedures to check outcomes, system accuracy, data sources, and algorithms.

What is an AI Audit?

Auditability in the AI context refers to how prepared an AI system is for an assessment of its algorithms, models, data, and design processes. Such assessments by internal and external auditors help establish the trustworthiness of the AI system. AI auditing is a necessary practice that demonstrates responsible AI system design and the justifiability of the predictions models deliver. AI auditability covers:

  • Evaluation of models, algorithms, and data streams
  • Analysis of operations, results, and anomalies observed
  • Technical aspects of AI systems for results accuracy
  • Ethical aspects of AI systems for fairness, legality, and privacy

Auditing AI systems is a modern approach to educate the C-suite about the value of AI adoption, expose the risks AI poses to the business, and develop controls that safeguard against the threats audits detect. AI auditing establishes systematic, piloted programs for better risk assessment and a higher level of governance.

Effective AI auditing requires the involvement of internal teams and third-party auditors. Enterprises sometimes need to share sensitive information to explain AI-driven functions that must align with regulatory requirements or industry practices. Keep a comprehensive record of data procurement, provenance, preprocessing, storage, and lineage, along with reports on data availability, the integrity of data sources, data relevance, security aspects, and unforeseen data issues across data pipelines.

Understanding AI audit with numbers

KPMG polled its clients to estimate how involved their internal audit functions were in managing risks around their organizations’ AI solutions. Here’s what they found:

  • More than 50% of respondents said AI was already being used in their organization
  • 45% of respondents planned to perform an audit of their AI solutions
  • 90% of respondents agreed that, where AI is applied, those AI applications should be subject to internal audit
  • 70% of respondents admitted they were not clear on what their AI audit approach should be

Why Audit AI?

AI-driven decision-making is not new to businesses. From autonomous vehicles and healthcare to banking, hospitality, and law enforcement, ML algorithms are used extensively. The goal is to make business decisions with limited human intervention.

With algorithms taking a central position in business decision-making, appropriate algorithm audits become business-critical. Auditing helps verify that algorithms are secure, ethical, and lawful. As AI moves from research and proof of concept into real-world business environments, its societal impact makes governance and auditing mandatory for AI success.

AI application auditors primarily consider two aspects when performing AI audits:

Compliance: Evaluate risks posed by AI applications to the rights and freedoms of citizens and data subjects. Comprehend the data privacy aspects and data protection principles.

Technology: Evaluate risks related to ML applications and document them in a risk and control matrix. For example, the initial phase of AI development (problem definition and strategy planning) carries risks such as an AI strategy that is not aligned with business objectives, or a governance framework that fails to assign responsibility. Documenting such implementation risks, with appropriate controls for each stage, brings accountability to AI ventures.
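As an illustration of such a risk and control matrix (the phases, risks, and controls below are hypothetical examples, not from any standard), it can be kept as structured data so each lifecycle stage carries documented risks and controls:

```python
# A minimal, illustrative risk-and-control matrix for an AI project.
# Phase names, risks, and controls are hypothetical examples.
risk_control_matrix = [
    {
        "phase": "Problem definition & strategy",
        "risk": "AI strategy not aligned with business objectives",
        "control": "Executive sign-off on documented AI objectives",
    },
    {
        "phase": "Data collection",
        "risk": "Unreliable or undocumented data sources",
        "control": "Maintain data provenance and lineage records",
    },
    {
        "phase": "Monitoring",
        "risk": "Undetected model drift in production",
        "control": "Automated drift alerts with review thresholds",
    },
]

def controls_for(phase: str) -> list[str]:
    """Return the documented controls for a given lifecycle phase."""
    return [row["control"] for row in risk_control_matrix if row["phase"] == phase]
```

Keeping the matrix as data rather than prose lets auditors query, extend, and version-control it alongside the project.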

The following points summarize the need for AI audits:

Legal compliance

Modern AI applications are helping grow businesses with new opportunities. However, each country and region has its own legal framework and regulations for assessing damages stemming from AI-based decision-making. AI audits offer a way to assure that AI-driven models comply with legal requirements and specified protocols.

Standardize AI

AI auditing practices help standardize and professionalize this maturing field, bringing industry experts, entrepreneurs, and regulators together to define AI audit standards and thresholds. Auditing insights also give project stakeholders valuable information about governance and standardization.

Augment iterative development

The ML lifecycle roughly divides into phases: problem definition, data collection, data preprocessing, model building, deployment, and monitoring. Though these phases seem stable, they interact and follow an iterative sequence, so each step must be audited independently to audit the entire ML system. Proper audit documentation and clarity on AI objectives ease this iterative development process.

Risk mitigation

Auditing an AI system fosters confidence and trust through a systematic approach and documentation. The auditing process checks the AI system’s compliance with regulatory, ethical, and governance requirements, helping mitigate the business risks AI applications pose. It also helps anticipate potential unforeseen risks and strategize risk mitigation plans.

AI Audit Process

AI audit is a systematic process across the entire ML lifecycle. In this section, we cover critical considerations of AI audit applicable to each stage of the ML project lifecycle.

AI Audit Across ML Lifecycle

AI project scope definition

  • Which other algorithms effectively address the same problem, and what problems might arise in implementing them while driving the expected results with AI?
  • Which regulatory constraints apply to achieving the business goals, with and without AI?
  • Which factors are crucial to determining the outcomes of the algorithms?

Data capturing

  • How do AI systems obtain their data? Are data sources reliable to serve critical AI systems?
  • Is there consistency between training data and data from original data sources?
  • Is the AI system facing any issues with data sources, such as changes in data capturing methodology or quality issues of legacy systems?
  • Are there set criteria for selecting the data used to train the ML model?
  • Which other data sources are available but not chosen to train the ML model?
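One way to probe the consistency question above in code (a minimal sketch with made-up numbers and a hypothetical 10% tolerance, not a prescribed audit method) is to compare summary statistics of the training sample against the original data source:

```python
from statistics import mean, stdev

def consistency_check(source: list[float], training: list[float],
                      rel_tol: float = 0.10) -> dict:
    """Flag when the training sample's mean or spread deviates from the
    source data by more than rel_tol (a hypothetical audit threshold)."""
    report = {}
    for name, fn in (("mean", mean), ("stdev", stdev)):
        s, t = fn(source), fn(training)
        drifted = abs(t - s) > rel_tol * abs(s) if s else abs(t) > rel_tol
        report[name] = {"source": s, "training": t, "flag": drifted}
    return report

# Example: a training sample whose distribution has shifted noticeably
source = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
training = [12.5, 13.0, 12.8, 13.2, 12.6, 12.9]
report = consistency_check(source, training)
```

A real audit would compare many features and use formal statistical tests, but even simple checks like this make inconsistency between training data and source data visible.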

Data preprocessing

  • Data imputation methods used to handle missing values
  • Methods applied to select training and test datasets, such as random stratification or cross-validation
  • Methods used to normalize and standardize data
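The three preprocessing items above can be sketched in plain Python (illustrative implementations assuming numeric features; an actual audit would review the project's real preprocessing code):

```python
import random
from statistics import mean

def impute_mean(values):
    """Replace missing values (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

def stratified_split(rows, labels, test_frac=0.25, seed=42):
    """Split rows so each class keeps roughly the same proportion
    in train and test (a simple stratified hold-out)."""
    rng = random.Random(seed)
    by_class = {}
    for row, y in zip(rows, labels):
        by_class.setdefault(y, []).append(row)
    train, test = [], []
    for y, group in by_class.items():
        rng.shuffle(group)
        cut = int(len(group) * test_frac)
        test += [(r, y) for r in group[:cut]]
        train += [(r, y) for r in group[cut:]]
    return train, test

def min_max(values):
    """Scale values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

rows = list(range(8))
labels = [0, 0, 0, 0, 1, 1, 1, 1]
train, test = stratified_split(rows, labels)
```

An auditor would ask which of these choices were made, why, and whether they were applied identically to training and production data.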

ML modeling

  • What other ML techniques address the given problem, and what are their outcomes? What reasons justify the chosen AI strategy?
  • How are the ML algorithms refined? What criteria were used, and what assumptions were made, in applying sophisticated AI solutions?
  • How are the algorithms coded? Do they rely on existing, ready-to-use packages, or are they coded from scratch?

Testing

  • Metrics used to evaluate the performance of ML models
  • Checking the sensitivity of model outcomes to minor changes in model features
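Both testing items can be sketched as follows (a toy threshold "model" and made-up labels, purely illustrative):

```python
def classification_metrics(y_true, y_pred):
    """Basic audit metrics computed from binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

def sensitivity_check(model, row, feature_idx, delta=0.01):
    """Return True if a small perturbation of one feature flips the prediction."""
    perturbed = list(row)
    perturbed[feature_idx] += delta
    return model(row) != model(perturbed)

# A trivial threshold model (hypothetical), for demonstration only
model = lambda row: int(row[0] > 0.5)
metrics = classification_metrics([1, 0, 1, 1], [1, 0, 0, 1])
```

A flipped prediction under a tiny perturbation near a decision boundary is exactly the kind of instability an audit should surface and document.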

ML deployment

  • Methods used to deploy the ML model, including any third parties involved in deployment
  • Post-deployment review methods used to ensure algorithm performance
  • Confirmation that model performance meets the set goals

ML monitoring

  • Does the AI system have an appropriate process in place to monitor model performance, drift, and model activities?
  • Are the actions taken in executing the ML pipeline reviewed to ensure compliance with laws and regulatory standards, alignment with organizational goals, and demonstration of ethical and social responsibility?
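Drift monitoring, mentioned in the first item, is often scored with the Population Stability Index (PSI). A minimal pure-Python sketch (the bucket count and the common 0.2 alert threshold are illustrative conventions, not fixed rules):

```python
import math

def psi(expected, actual, buckets=5):
    """Population Stability Index between a baseline sample and a
    production sample; values above ~0.2 are commonly read as drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / buckets or 1.0

    def frac(sample):
        counts = [0] * buckets
        for v in sample:
            idx = min(int((v - lo) / width), buckets - 1)
            counts[idx] += 1
        # small epsilon avoids log(0) for empty buckets
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
identical = list(baseline)
shifted = [v + 0.5 for v in baseline]
```

In practice a monitoring platform computes such scores continuously per feature and per prediction stream, and alerts when thresholds are crossed.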

Auditing AI – A Way Towards Trustworthy AI

Trustworthy AI is an umbrella term covering several hot trends in the AI domain. A fair, responsible, and auditable AI system fosters users’ trust and confidence in AI, and AI auditing practices help make your AI system trustworthy.

Let’s explore how.

Planning an AI strategy

Successful AI adoption requires a clear understanding of business objectives and of AI’s potential to achieve them. Decision-makers should confirm that their AI strategy aligns with the desired business goals and justifies their AI expenditure.

A thorough AI audit plan that checks AI-driven business outcomes helps prevent many failures. Such an audit helps the business grow by reaping the benefits of AI: new opportunities, market growth, smoother operations, and competitive advantage.

David Yakobovitch mentions this in one of his LinkedIn articles. He interviewed Veljko Krunic, the author of AI, on the HumAIn Podcast. During the interview, Krunic highlighted that businesses must act on the results AI delivers, and executives must establish what benefits AI brings to the table.

Enhancing AI accuracy

A model’s accuracy is the AI system’s capacity to make correct predictions based on sound judgments: making the right forecasts, offering suitable recommendations, and categorizing data into the right classes. Accuracy helps make AI systems trustworthy enough to operate well across diverse inputs and circumstances, and it supports reproducibility in ML experiments, producing the same results under the same conditions.

Changes in data and algorithms significantly hamper AI system accuracy, and biased datasets further degrade the accuracy of predictions. Continuous monitoring of ML pipelines, algorithm evaluation, and checks for potential vulnerabilities constitute good practice for an AI accuracy audit.
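Because accuracy is hampered by changes in data and algorithms, one lightweight control an audit might look for (a sketch; the function and data names are hypothetical) is fingerprinting datasets and configurations so that any change between experiments is detectable:

```python
import hashlib
import json

def fingerprint(obj) -> str:
    """Deterministic SHA-256 fingerprint of a dataset or model config,
    serialized to canonical JSON so identical content always hashes alike."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

training_data = {"rows": [[1, 2], [3, 4]], "label": "churn"}
baseline_hash = fingerprint(training_data)

# Later, an auditor can confirm the data used in an experiment is unchanged:
unchanged = fingerprint(training_data) == baseline_hash
```

Recording such fingerprints alongside experiment results makes the reproducibility claim above checkable rather than a matter of trust.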

Data privacy audit

In this data-driven business world, consumers expect their information to be kept safe. Data privacy has become a high-priority concern for any modern AI system, as it relates to the principle of damage prevention. It covers both consumer data and the data generated by consumers’ interaction with an AI system. Defining data access procedures and permissions constitutes good practice for data privacy compliance.

Business leaders bear increased responsibility for complying with data privacy requirements. Data privacy covers implementing privacy standards, protecting consumer rights and interests, and the legal aspects of data usage. More technically, an AI audit examines the quality and integrity of the data consumed, access procedures, data relevance, and privacy-respecting methods of handling data.

Enhancing security controls

AI systems are prone to security attacks and damage by external hackers. Although AI brings incredible opportunities, it is also vulnerable to security threats and data compromise, and AI systems that rely on third-party devices with custom security protocols run a particularly high risk.

AI applications equipped with tight security controls and standards help close security loopholes. Controls used to strengthen an AI security posture include:

  • Comply with external security certifications
  • Subscribe to security advisories to receive alerts
  • Require code review by the team, with at least one reviewer who is not the author of the code
  • Implement a model governance policy
  • Document policies, processes, and contracts for dealing with third parties, breach reporting, and escalation
  • Train staff to understand the breach reporting policy and procedures
  • Track changes made to the AI system design, analyze complaints broadly, and justify measures that reduce the risk of the next attack
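The last control, tracking changes made to the AI system design, can be supported by an append-only, hash-chained change log (a simplified sketch, not a production implementation):

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log of AI system changes: each entry
    embeds the previous entry's hash, so rewriting history is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; False means an entry was altered or reordered."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"change": "retrained model", "approver": "reviewer-1"})
log.append({"change": "updated feature set", "approver": "reviewer-2"})
```

A verifiable change history gives auditors the "broader analysis of complaints" context the control above asks for, and makes tampering with the record itself evident.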

Industry Views on AI Audits

Meeting regulatory requirements while remaining profitable is among the biggest challenges for any business. Fortunately, AI auditing helps achieve both. While reading the article “A new wave of AI auditing startups wants to prove responsibility can be profitable,” I found meaningful discussions of enterprise AI audit plans and the need for them.

Hired’s algorithmic system helps employers find the right candidates from different communities and informs job seekers if they are asking for salaries below the average for a given position. Hired CTO Dave Walters said that even before the NYC law passed, the company intended to audit its systems for fairness and transparency in 2022.

The company has yet to choose an audit service. Still, Walters said he expects any effective audit will require the company to offer up its algorithmic models and training data, with proper security protections in place. “That third-party service will need to be able to see deep enough under the hood to understand what’s going on,” said Walters.

“We explore the potential human rights impacts that may arise from the deployment of AI/ML in context — for example, impacts on privacy, non-discrimination, freedom of expression, freedom of movement, freedom of association, security, access to culture, child rights, and access to remedy,” said Allison-Hope, vice president at BSR, a firm that helps clients evaluate AI systems against human rights-related measures.

Deloitte partnered with Chatterbox Labs to implement the consultancy’s Trustworthy AI framework, which tests and monitors AI according to trust and fairness measures.

AI Auditing Challenges and Future Roadmap

Moving to the final section of this comprehensive guide, we will explore challenges in the AI auditing journey and present a roadmap to plan better AI auditing practices. 

AI auditing challenges

  • Immature AI auditing frameworks and regulations
  • Limited precedents for AI use cases
  • No clear definitions and taxonomies of AI
  • Wide variance among AI applications and solutions
  • AI systems that are not yet mature and are evolving continuously
  • Lack of explicit AI audit guidelines
  • Lack of tactical starting points
  • A steep learning curve for audit stakeholders
  • Lack of cross-team transparency when AI auditing involves third-party auditors

AI Auditing – future roadmap

To overcome these challenges, global organizations have defined a number of AI auditing frameworks. The following frameworks can help you audit AI-enabled initiatives.

AI Audit Frameworks

COBIT Framework

An umbrella framework for the governance and management of enterprise information and technology, including process descriptions, base practices, and outcomes. It serves as a good starting point for an AI audit.

US Government Accountability Office AI Framework 

Developed by the U.S. Government Accountability Office, this framework ensures accountability and responsible AI in government processes. It is structured around four major principles: data, governance, performance, and monitoring.

Singapore PDPC Model AI Governance Framework 

Focuses on four key areas: internal governance, human involvement in AI-augmented decision-making, operations management, and communication.

IIA Artificial Intelligence Auditing Framework 

An AI auditing framework based on strategy, governance, and the human factor. It also covers data quality, performance, AI competency, infrastructure, resilience, and ethics.

COSO ERM Framework 

An AI audit framework built on the components of governance, strategy, performance, review, and communication, along with 20 key principles. A perfect guide for AI risk management and governance.


Best Practices for Successful AI Auditing

  • Accept, use, and adapt AI auditing frameworks and regulations
  • Ensure a transparent communication system with stakeholders
  • Implement well-informed practices about application design and architecture to finalize its scope 
  • Ensure transparency in the iterative development process
  • Ensure control and governance
  • Include all stakeholders
  • Involve domain experts and specialists, as needed
  • Ensure cross-team transparency with documentation and communication

Adopting the right AI auditing framework and following these best practices help you build a sustainable AI system. Enterprises should set up procedures to track changes in AI systems with the right degree of automation.

One effective practice for accommodating ML system changes is monitoring the complete ML pipeline.

AI observability platforms such as Censius can help you through this.

It automates tracking the performance of production ML models and the entire ML pipeline. Using Censius, you can easily set up and configure monitors to track drift, data quality issues, and performance metrics.

Benefits of Auditing AI

Although AI auditing is a new field, it is gaining momentum in the business world with its numerous benefits: 

  • Risk mitigation 
  • Attaining legal compliance
  • Addressing algorithmic bias
  • Increased reliability 
  • Accountability

AI audit is crucial for organizations in their quest to reap the benefits of this maturing technology. It also helps understand KPIs associated with AI technology. Neglecting AI audit initiatives can impose different risks on your AI ventures. C-suite and other stakeholders should consider AI auditing implementation a top priority to build sustainable AI systems with better accountability, trust, and compliance.
