AI Black Box
Black-box AI refers to ML models whose behavior cannot be explained by inspecting their parameters, which makes it difficult to retrace how an output was produced.
What is the AI Black Box Challenge?
In complex ML systems, it becomes difficult to comprehend and retrace how a specific prediction was derived. A model that computes this way is referred to as a 'black box' because it is challenging to interpret.
In his Interpretable Machine Learning e-book, Christoph Molnar defines a black-box model as an ML model that cannot be understood by looking at its parameters. A black-box model is not explainable by itself: even the data scientists and engineers who built it can't explain how it arrived at a specific result.
For example, complex ML models like deep neural networks with thousands or millions of parameters (weights) are often black boxes. Comprehending such a model's behavior is challenging even when users have full visibility into its structure and weights.
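To make this concrete, here is a minimal sketch in plain NumPy (an illustration, not from the article): every parameter of a tiny feedforward network is fully visible, yet the raw numbers reveal nothing human-readable about why an input maps to a given output.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy network: 10 inputs -> 64 hidden units -> 1 output.
# All parameters are in plain sight, but individually meaningless.
W1, b1 = rng.standard_normal((10, 64)), np.zeros(64)  # 640 + 64 params
W2, b2 = rng.standard_normal((64, 1)), np.zeros(1)    #  64 +  1 params

def predict(x):
    h = np.maximum(0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2              # linear output

x = rng.standard_normal(10)
print("prediction:", predict(x))
n_params = W1.size + b1.size + W2.size + b2.size
print("parameters, all visible:", n_params)  # 769, and still opaque
```

Real deep networks multiply this by many layers and millions of weights, which is why inspecting parameters alone cannot explain a prediction.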
Another type of black-box AI is the proprietary algorithm, whose details are withheld from users of the AI system for reasons such as protecting intellectual property or preventing manipulation by malicious actors. Such black-box models can be hazardous when they inform prison sentencing, hospital treatment decisions, or credit scoring.
How Does Black-Box AI Affect Business?
Black-box AI affects businesses on several fronts:
Erroneous decisions
Decisions based on the predictions of black-box models can turn out to be wrong, negatively impacting consumer safety, health, and trust.
Overlooked vulnerabilities
If traditional risk-management controls and internal audits fail to pinpoint algorithmic risks, unforeseen failure modes can go undetected and pose new challenges.
Compliance issues
Black-box models can lead to regulatory compliance disputes when their predictions misalign with legal, social, ethical, or cultural norms.
Delays
When an algorithm-dependent system malfunctions, the opacity of black-box AI and the lack of appropriate guidelines can delay the resolution of the resulting business issues.
Third-party Induced Risks
Limited visibility into a third party's algorithm design, training data, and processes introduces risk when the resulting ML applications are deployed commercially.
Opening the AI Black Box
Organizations apply standard practices and explainability approaches to tackle the AI black-box challenge. Here are some of them:
- Define an AI risk-management strategy and governance policies that appropriately cover roles, responsibilities, training, and company policy.
- Review black-box algorithms and apply controls that assess the entire lifecycle of black-box models.
- Periodically validate algorithms to test the validity of training data, check for vulnerabilities, and fine-tune model performance.
- Engage with researchers and innovators to adopt better practices and tools that help crack the black-box AI challenge.
- Implement an explainable-AI approach using modern tools and libraries such as LIME, SHAP, and ELI5 (see the sketch after this list). Other proven explainability techniques include PDP, ICE, LOCO, and ALE. Image-specific tools such as class activation maps (CAMs), and methods like Integrated Gradients that apply to both text and images, also help unlock the ML black box.
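As a hedged illustration of one library named above, the sketch below applies SHAP to a tree ensemble; the dataset, model, and plotting call are illustrative assumptions, not prescribed by this article.

```python
# Illustrative only: any scikit-learn estimator and dataset would do.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# Each row attributes one sample's prediction across the input features;
# summary_plot ranks features by their overall impact on the model.
shap.summary_plot(shap_values, X.iloc[:200])
```

Attribution methods like this do not open the model itself; they estimate, per prediction, how much each feature pushed the output up or down, which is often enough to audit a black-box model's behavior.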
Further Reading
The dangers of trusting black-box machine learning