In the world of artificial intelligence, explainability has become a contentious topic. One view among machine learning experts is that the less interpretable a model is, the more accurate it will be, and some fear that requiring explainable AI will slow the adoption of machine learning. However, explainability doesn’t seek to slow down advancements in AI; it seeks to make that advancement fairer and safer, both for everyday people and for the businesses implementing the AI. Explainability also goes hand in hand with decreasing bias in AI.
How important is the interpretability of your taxi driver?
Yann LeCun, VP & Chief AI Scientist at Facebook, once asked the question, “How important is the interpretability of your taxi driver?” On the surface, this thought experiment might seem like an explainability killer. As long as you get from point A to point B safely, who cares how and why the taxi driver got you there? But this is a red herring: of course you want to understand your taxi driver’s choices! Otherwise, what’s to stop them from taking a longer route that makes you late or costs you more money? The taxi driver also needs to be able to explain their actions to protect themselves against any legal issues.
Explainable AI aims to address the lack of information around how AI decisions are made. It matters because when an uninterpreted model fails, people in vulnerable groups are the most likely to be negatively affected, as the Gender Shades study showed. The study revealed that facial analysis systems from three top AI companies performed better on men than on women, and worst of all on women with darker skin tones. Without explainability, discriminatory decisions go unchecked.
There are also growing legal issues surrounding AI that explainability seeks to address. The European Union, for example, has expanded the rights around automated decision-making under GDPR. Meanwhile, in the US, certain decisions, such as those based on creditworthiness, are also subject to right-to-explanation laws. As regulations catch up with the technology, the need to explain AI decisions will become increasingly important.
Plus, an anomalous decision from a model often reflects something in the world that wasn’t captured during model development. When this happens, it’s important that data scientists know what data or new features are needed to retrain the model properly.
A lack of clarity around AI decisions can frustrate users who can’t understand why a decision was made, and it can even create legal exposure if black box decisions perpetuate discrimination against protected groups. This underscores the importance of understanding AI predictions in order to develop AI that is fairer and more helpful for society.
Ignorance is not bliss.
There’s a real and very valid concern around the legality of implementing black box models in sensitive areas. If a police officer arrests someone based on an algorithm, how do they show that the decision wasn’t based on prior bias? If an algorithm tells a doctor a patient is dying, but the AI is wrong, who’s liable for the incorrect diagnosis: the doctor using the algorithm, the company that sold the software, or the architects of the original algorithm?
For example, look at the now-scrapped HR system Amazon built:
Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools. (Reuters)
Even though Amazon edited the programs to make them neutral to these particular terms, there was no guarantee that the machines wouldn’t devise other ways of sorting candidates that could prove discriminatory. Recruiters were never able to trust the system enough to rely on it alone, executives lost hope in the project, and Amazon ultimately disbanded the team. And herein lies one of the biggest arguments for explainability: it allows for the safe, responsible adoption of AI. Explainability could have helped these developers catch the disparity before deployment, and consequently many more women would have had a fair chance of being screened.
Explainability will likely continue to be a push and pull in the AI world. However, we’re already seeing regulations like GDPR push explainable AI forward. Being able to interpret AI remains key to addressing the lack of trust around black box decisions, avoiding vulnerabilities in models, and decreasing the amount of human bias in machine learning.
FAQ:
Why is Explainable AI crucial for ethical AI development?
Explainable AI is crucial for ethical AI development because it provides transparency in how AI models make decisions. This helps ensure that AI systems are fair, accountable, and free from hidden biases, fostering trust among users and stakeholders.
How can Explainable AI improve trust in AI systems?
Explainable AI improves trust in AI systems by making their decision-making processes transparent and understandable. When users can see and comprehend how an AI model arrives at a conclusion, they are more likely to trust its outputs and rely on its recommendations.
What are some common techniques used in Explainable AI?
Common techniques used in Explainable AI include feature importance analysis, model-agnostic methods like LIME and SHAP, and visualizations such as decision tree plots and attention maps. These techniques help interpret and explain the inner workings of AI models.
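As a concrete illustration, here is a minimal sketch of feature importance analysis using scikit-learn’s permutation importance; the dataset, model, and printed output are illustrative assumptions for the example, not part of any particular XAI product.

```python
# Minimal sketch: feature importance via permutation importance (scikit-learn).
# The dataset and model below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Model-agnostic tools like LIME and SHAP go a step further by attributing individual predictions to specific features, which is often what regulators and affected users actually need.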
How does Explainable AI help in regulatory compliance?
Explainable AI helps in regulatory compliance by providing clear and understandable explanations for AI-driven decisions. This transparency is often required by regulations and standards to ensure that AI systems are making fair and unbiased decisions, and that their processes can be audited.
What role does Explainable AI play in AI bias mitigation?
Explainable AI plays a vital role in AI bias mitigation by revealing how and why certain decisions are made. By understanding the decision-making process, data scientists can identify and address biases in the model, leading to fairer and more equitable outcomes.
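For instance, a common first step in identifying bias is simply comparing model performance across demographic groups, in the spirit of the Gender Shades audit. The sketch below assumes a pandas DataFrame with hypothetical y_true, y_pred, and group columns; in practice these would come from your own model and data.

```python
# Minimal sketch: a group-wise disparity check (hypothetical column names and values).
import pandas as pd

# Illustrative labels and predictions; in practice these come from your trained model.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 0, 0, 1],
    "group":  ["a", "a", "a", "a", "b", "b", "b", "b"],
})

# Per-group accuracy; large gaps flag a disparity worth explaining and fixing.
per_group_accuracy = (
    df.assign(correct=df["y_true"] == df["y_pred"])
      .groupby("group")["correct"]
      .mean()
)
print(per_group_accuracy)
```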