Prioritizing Trust in AI

Society’s dependence on artificial intelligence (AI) and machine learning (ML) applications continues to grow, redefining how information is consumed. From AI-powered chatbots to information synthesis generated by large language models (LLMs), society has access to more information and deeper insights than ever before. But as technology companies race to embed AI throughout their value chains, a key question looms: can we really trust the output of AI solutions?

Can we really trust AI output without uncertainty quantification?

For any given input, a model could have produced many other, equally plausible outputs, whether because of insufficient training data, variation in the training data, or other causes. Uncertainty quantification is the process of estimating what those other outputs might have been. By applying it when they deploy models, organizations can give end users a clearer sense of how much to trust a given AI/ML output.
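One common way to obtain that set of plausible outputs, sketched below under illustrative assumptions (the article does not prescribe a specific method), is to train a small ensemble of models on resampled versions of the training data and treat the spread of their predictions as an estimate of the other outputs the model could have produced. The data, model type, and hyperparameters here are placeholders.

```python
# Minimal sketch: bootstrap ensemble as one route to uncertainty quantification.
# Everything here (toy data, DecisionTreeRegressor, 50 members) is illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))        # toy input feature
y = 2.0 * X[:, 0] + rng.normal(0, 2, 200)    # toy noisy target

# Fit an ensemble, each member on a bootstrap resample of the training data.
ensemble = []
for _ in range(50):
    Xb, yb = resample(X, y)
    ensemble.append(DecisionTreeRegressor(max_depth=4).fit(Xb, yb))

# For one new input, collect one output per ensemble member.
x_new = np.array([[7.5]])
outputs = np.array([m.predict(x_new)[0] for m in ensemble])

print(f"point prediction ≈ {outputs.mean():.1f}")
print(f"plausible range  ≈ [{np.percentile(outputs, 5):.1f}, "
      f"{np.percentile(outputs, 95):.1f}]")
```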

Imagine a model that predicts tomorrow's high temperature. The model might output 21 °C, but uncertainty quantification applied to that output might reveal that the model could just as easily have produced 12 °C, 15 °C, or 16 °C. Knowing this, how much is the simple point prediction of 21 °C worth now? Despite its potential to instill trust, or to counsel caution, many organizations skip uncertainty quantification because of the extra work needed to implement it and the demands it places on computing resources and inference speed.
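To make the temperature example concrete, the short sketch below summarizes the set of plausible outputs quoted above into a point value plus an interval. The choice of summary statistics (mean, standard deviation, a 90% interval) is one reasonable convention rather than anything mandated by the article.

```python
# Summarizing a set of equally plausible model outputs for reporting.
import numpy as np

plausible_highs_c = np.array([21.0, 12.0, 15.0, 16.0])  # values from the example above

point = plausible_highs_c[0]                 # the single value the model reported
mean = plausible_highs_c.mean()
spread = plausible_highs_c.std(ddof=1)
lo, hi = np.percentile(plausible_highs_c, [5, 95])

print(f"point prediction: {point:.0f} °C")
print(f"mean of plausible outputs: {mean:.1f} °C (±{spread:.1f} °C)")
print(f"90% interval: {lo:.1f} °C to {hi:.1f} °C")
```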

Human-in-the-loop systems, such as medical diagnostic and prognostic systems, involve humans as part of the decision-making process. By blindly trusting the output of healthcare AI/ML solutions, healthcare professionals risk misdiagnosing patients, which can lead to substandard health outcomes, or worse. Uncertainty quantification lets those professionals see, quantitatively, when they can rely on an AI output and when they should treat a specific prediction with caution. Similarly, in fully autonomous systems such as self-driving vehicles, a model that estimates the distance to obstacles without uncertainty quantification may produce an output that leads to a crash which could otherwise have been avoided.
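One way a human-in-the-loop workflow might act on quantified uncertainty, sketched here purely as an assumption rather than anything described in the article, is to route predictions whose uncertainty exceeds a threshold to a human reviewer instead of acting on them automatically. The threshold and field names are hypothetical.

```python
# Hypothetical deferral policy: wide predictive intervals go to a human reviewer.
from dataclasses import dataclass

@dataclass
class Prediction:
    value: float   # e.g. an estimated risk score
    spread: float  # e.g. width of the 90% interval from uncertainty quantification

REVIEW_THRESHOLD = 0.2  # assumed policy parameter, not from the article

def route(pred: Prediction) -> str:
    """Return 'auto' when the model is confident enough, else 'human-review'."""
    return "auto" if pred.spread <= REVIEW_THRESHOLD else "human-review"

print(route(Prediction(value=0.83, spread=0.05)))  # -> auto
print(route(Prediction(value=0.61, spread=0.35)))  # -> human-review
```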

The challenge of building trust in AI/ML models with Monte Carlo methods

Monte Carlo methods, developed during the Manhattan Project, are a powerful way to perform uncertainty quantification. They involve running an algorithm repeatedly with slightly different inputs until further iterations add no new information to the output; when the process reaches that state, it is said to have converged. One disadvantage of Monte Carlo methods is that they are generally slow and computationally intensive, requiring many repetitions of the underlying calculations to reach a converged output, with inherent variability across those outputs. Because Monte Carlo methods use a random number generator as one of their key building blocks, even a run with many internal repetitions will give different results when the whole process is repeated with the same parameters.
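The toy estimator below illustrates the two properties described above: it repeats a randomized calculation until additional iterations stop changing the answer (a crude convergence test), and two full runs with identical parameters still produce slightly different results because a random number generator drives the computation. The integrand (estimating π), batch size, and tolerance are illustrative choices, not from the article.

```python
# Minimal Monte Carlo sketch: convergence check plus run-to-run variability.
import random
import math

def monte_carlo_pi(tolerance=1e-3, batch=10_000, max_batches=1_000):
    inside = 0
    total = 0
    estimate = 0.0
    for _ in range(max_batches):
        for _ in range(batch):
            x, y = random.random(), random.random()
            inside += (x * x + y * y) <= 1.0
        total += batch
        new_estimate = 4.0 * inside / total
        if abs(new_estimate - estimate) < tolerance:  # crude convergence test
            return new_estimate, total
        estimate = new_estimate
    return estimate, total

# Two runs with identical parameters still differ slightly.
for run in range(2):
    value, samples = monte_carlo_pi()
    print(f"run {run}: pi ≈ {value:.4f} after {samples:,} samples "
          f"(true value {math.pi:.4f})")
```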

The way forward for trustworthy AI/ML models

A new class of computing platform is being developed that, unlike conventional servers and AI-specific accelerators, operates directly on empirical probability distributions, just as traditional computing platforms operate on integers and floating-point values. By deploying their AI models on these platforms, organizations can apply uncertainty quantification to their pretrained models automatically, and can also speed up other kinds of computation that traditionally rely on Monte Carlo methods, such as value-at-risk (VaR) calculation in finance. For VaR in particular, such a platform lets organizations work with empirical distributions built directly from real market data rather than approximating those distributions with samples from a random number generator, yielding more accurate analyses and faster results.
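The contrast described above can be illustrated in ordinary code, leaving aside any specific platform: a VaR figure can be read directly as a quantile of the empirical distribution of historical returns, whereas the Monte Carlo route first fits a parametric model and then draws random samples from it. The returns below are synthetic placeholders, not real market data.

```python
# Illustrative comparison: empirical-quantile VaR vs. Monte Carlo VaR.
import numpy as np

rng = np.random.default_rng(42)
historical_returns = rng.normal(0.0005, 0.02, size=2_500)  # stand-in for daily return history

confidence = 0.95

# Empirical approach: VaR is a quantile of the observed returns themselves.
var_empirical = -np.quantile(historical_returns, 1.0 - confidence)

# Monte Carlo approach: fit a parametric model, draw samples, take the quantile.
mu, sigma = historical_returns.mean(), historical_returns.std(ddof=1)
simulated = rng.normal(mu, sigma, size=100_000)
var_monte_carlo = -np.quantile(simulated, 1.0 - confidence)

print(f"95% VaR (empirical distribution): {var_empirical:.4f}")
print(f"95% VaR (Monte Carlo sampling):   {var_monte_carlo:.4f}")
```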

Recent computational breakthroughs are significantly lowering the barriers to uncertainty quantification. A research article my colleagues and I published in Machine Learning in 2024 shows how the next-generation computing platform we developed runs uncertainty quantification analyses more than 100 times faster than traditional analyses on a high-end Intel Xeon-based server. Advances like this allow organizations deploying AI solutions to implement uncertainty quantification easily and to run it with low overhead.

The future of AI/ML trustworthiness depends on advanced next-generation computing

As organizations integrate more AI solutions into society, the trustworthiness of AI/ML will become a priority. Enterprises will no longer be able to skip implementing, in their AI model deployments, the facilities that let consumers know when to treat a specific AI output with doubt. The need for interpretability and uncertainty quantification is clear, with roughly three-quarters of people indicating that they would be more willing to trust AI systems if appropriate assurance mechanisms were available.

New computing technologies are making uncertainty quantification easier to implement and deploy. As industries and regulators address the other challenges of deploying AI in society, there is at least an opportunity to meet the demand for human trust by making uncertainty quantification a standard part of AI deployment.
