The promise of artificial intelligence is finally coming to life. From healthcare to fintech, companies across industries are rushing to implement LLMs and other machine learning systems to complement their workflows and free up time for more urgent or higher-value tasks. But it is all happening so fast that many may be overlooking a key question: how do we know that the models making these decisions aren't prone to hallucinations?
In healthcare, for example, AI has the potential to predict clinical outcomes or discover new drugs. If a model goes astray in such scenarios, it could produce results that end up harming a patient, or worse. Nobody wants that.
This is where the concept of AI interpretability comes in: the process of understanding the reasoning behind the decisions or predictions a machine learning system makes, and making that reasoning accessible to decision makers and other affected parties who have the authority to act on it.
When done right, it can help teams catch unexpected behavior, allowing them to get rid of issues before they cause real damage.
But it’s far from a piece of cake.
First, let’s understand why AI interpretability is a must
As critical sectors like healthcare continue to deploy models with minimal human oversight, AI interpretability has become essential for ensuring transparency and accountability in the systems being used.
Transparency ensures that human operators can understand the underlying logic of the ML system and audit it for bias, accuracy, fairness, and adherence to ethical guidelines. Accountability, meanwhile, ensures that identified shortcomings are addressed in a timely manner. The latter is particularly critical in high-stakes areas such as automated credit scoring, medical diagnostics, and autonomous driving, where an AI's decision can have far-reaching consequences.
Beyond that, AI interpretability also helps build trust in and acceptance of AI systems. When individuals can understand and validate the reasoning behind a machine's decisions, they are more likely to trust its predictions and responses, paving the way for widespread acceptance and adoption. Just as important, when explanations are available, it becomes easier to answer questions of ethical and legal compliance, whether around discrimination or data use.
AI interpretability is not an easy task
While AI interpretability has clear benefits, the complexity and opacity of modern machine learning models make it a serious challenge.
Most high-end AI applications today use deep neural networks (DNNs), which stack multiple hidden layers to learn reusable, modular functions and use their parameters more efficiently when modeling the relationship between inputs and outputs. Given the same amount of parameters and data, DNNs readily outperform shallow neural networks, which are often reserved for simpler tasks such as linear regression or feature extraction.
However, this architecture of many layers and thousands or even millions of parameters makes DNNs highly opaque: it is difficult to trace how specific inputs contribute to a model's decision. In contrast, shallow networks, with their simple architecture, are far easier to interpret.
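To make that contrast concrete, here is a minimal scikit-learn sketch on synthetic data (the model sizes are illustrative, not recommendations): the single weight vector of a linear classifier maps directly onto the input features, while a deeper network spreads its logic across thousands of entangled parameters that cannot be read off individually.

```python
# Minimal sketch: a readable linear model vs. an opaque multi-layer network.
# Synthetic data and illustrative hyperparameters only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Linear model: one weight per feature, directly inspectable.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("Linear coefficients:", linear.coef_.round(2))

# Deeper network: several hidden layers, thousands of entangled parameters.
deep = MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=500, random_state=0).fit(X, y)
n_params = sum(w.size for w in deep.coefs_) + sum(b.size for b in deep.intercepts_)
print("MLP parameter count:", n_params)  # no single weight maps cleanly to one feature
```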
To sum up, there is often a trade-off between interpretability and predictive performance. Opt for a high-performance model like a DNN, and the system may not offer transparency; opt for something simpler and more interpretable, like a shallow network, and the accuracy of the results may not be up to par.
Finding a balance between the two continues to be a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.
What can be done?
To find common ground, researchers are developing rule-based and interpretable models, such as decision trees and linear models, that prioritize transparency. These models offer explicit rules and understandable representations, allowing human operators to interpret their decision-making process. However, they still lack the complexity and expressiveness of more advanced models.
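As a concrete illustration, the sketch below (scikit-learn, with a toy dataset and tree depth chosen purely for readability) trains a shallow decision tree and prints its learned rules, the kind of explicit, auditable representation these models provide.

```python
# Minimal sketch of a rule-based, interpretable model: a shallow decision tree
# whose learned rules can be printed and reviewed directly.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Explicit if/then rules a human operator can audit for plausibility and bias.
print(export_text(tree, feature_names=list(data.feature_names)))
```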
As an alternative, post-hoc interpretability, where tools are applied to explain a model's decisions after it has been trained, can be useful. Currently, methods such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) can provide insight into model behavior by approximating feature importance or generating local explanations. They have the potential to bridge the gap between complex models and interpretability.
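A minimal post-hoc sketch with SHAP might look like the following, assuming the `shap` package and a tree-based model (LIME follows a similar pattern with its own explainer classes); exact return types vary by version, so treat this as illustrative rather than definitive.

```python
# Hedged sketch of post-hoc explanation with SHAP on an already-trained model.
# Assumes `pip install shap scikit-learn`; dataset and model are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# TreeExplainer estimates, per prediction, how much each feature pushed the
# output up or down relative to a baseline expectation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:50])

# The values can then be inspected or visualized, e.g. with shap.summary_plot.
print(type(shap_values))
```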
Researchers can also opt for hybrid approaches that combine the strengths of interpretable models and black-box models, striking a balance between interpretability and predictive performance. These approaches rely on model-agnostic methods, such as LIME and surrogate models, to provide explanations without compromising the accuracy of the underlying complex model.
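One common hybrid pattern is the global surrogate, sketched below with scikit-learn on synthetic data: a shallow tree is fit to the black-box model's predictions rather than to the raw labels, so its rules approximate the complex model's behavior without touching its accuracy. Checking the surrogate's fidelity to the black box is the key step before trusting its explanations.

```python
# Minimal sketch of a global surrogate model (synthetic data, illustrative models).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=15, random_state=0)

# The complex "black box" keeps its predictive performance untouched.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_preds = black_box.predict(X)

# The surrogate learns to imitate the black box, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, bb_preds)

# Fidelity: how often the interpretable surrogate agrees with the black box.
fidelity = accuracy_score(bb_preds, surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
```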
AI interpretability: the possibilities ahead
Going forward, AI interpretability will continue to evolve and play a central role in shaping a responsible and trustworthy AI ecosystem.
The key to this evolution lies in the widespread adoption of model-agnostic explainability techniques, which can be applied to any machine learning model regardless of its underlying architecture, and in the automation of the training and interpretability process. These advances will allow users to understand and trust high-performing AI algorithms without requiring deep technical expertise. At the same time, it will be equally critical to balance the benefits of automation with ethical considerations and human oversight.
Finally, as model training and interpretability become more automated, the role of machine learning experts may shift to other areas, such as selecting the right models, implementing effective feature engineering, and making timely, informed decisions based on interpretability insights.
They would still be there, but not to train or interpret the models.
Shashank Agarwal is Manager, Decision Science at CVS Health.