Opening the black box: How a new AI model changes fact-checking

Imagine a world where AI not only tells you whether a claim is true or false, but also lays out its reasoning like a detective presenting evidence in court.
That’s the vision behind a new system developed by researchers at Soochow University, a model that could change how journalists, lawyers and scientists verify information.
Seeking transparent AI
For years, AI fact-checkers have operated much like a mysterious oracle: they delivered verdicts, but rarely explained how they reached them. This “black box” problem frustrates professionals who need to trust, and critically examine, the logic behind automated decisions.
Enter the Heterogeneous Graph Attention Network, or HEGAT. This AI doesn’t just spit out a yes or no. Instead, it highlights the exact sentences in a document that support its conclusion, offering a window into its digital reasoning. “Our goal is to open the black box of AI decision-making,” said the professor leading the project. “By showing exactly which sentences support our model’s judgment, we make its reasoning as clear as a step-by-step explanation of the evidence.”
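To make the idea concrete, here is a minimal sketch of what such an explainable verdict could look like as a data structure. The `FactCheckResult` class, its labels, and the sample document are illustrative assumptions, not HEGAT’s actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class FactCheckResult:
    """Hypothetical container for an explainable fact-checking verdict."""
    claim: str
    verdict: str                                   # e.g. "SUPPORTED" or "REFUTED"
    evidence: list = field(default_factory=list)   # indices of supporting sentences

doc = [
    "The city council approved the budget on Monday.",
    "The mayor denied any changes to the budget.",
    "Local media reported record attendance at the meeting.",
]

# A transparent fact-checker returns the verdict *and* the sentences behind it.
result = FactCheckResult(
    claim="The budget was approved.",
    verdict="SUPPORTED",
    evidence=[0],
)

for i in result.evidence:
    print(f"Evidence: {doc[i]}")  # prints the supporting sentence, not just a label
```

The point is the shape of the output: a verdict paired with pointers back into the source text, so a human can audit the decision.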
How it works: a network of evidence
The secret of HEGAT is its ability to read between the lines. Rather than scanning text linearly, the model builds a network of connections between words, sentences, and subtle linguistic cues, such as negation or speculative phrasing. This multi-layered view lets it zero in on the most important passages, even when a claim is buried under hedging or dense legal language.
The technical magic happens through graph attention mechanisms. These let the AI weigh both fine details (such as a single word) and the big picture (the overall structure of the document), achieving a nuanced understanding that earlier models struggled to reach.
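The core idea of graph attention can be sketched in a few lines of NumPy. This is a simplified, generic GAT-style layer, not HEGAT’s actual architecture: each node (a word or sentence) scores its neighbors, then mixes their features in proportion to those scores.

```python
import numpy as np

def graph_attention(H, A, W, a):
    """One simplified graph-attention layer (GAT-style sketch).

    H: (N, F)  node features (words or sentences as graph nodes)
    A: (N, N)  adjacency mask, nonzero where two nodes are connected
    W: (F, F2) shared linear transform
    a: (2*F2,) attention scoring vector
    """
    Z = H @ W                                     # transform node features
    N = Z.shape[0]
    # Pairwise attention logits: how much does node j matter to node i?
    logits = np.array([[a @ np.concatenate([Z[i], Z[j]]) for j in range(N)]
                       for i in range(N)])
    logits = np.where(A > 0, logits, -1e9)        # only attend along edges
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over each node's neighbors
    return alpha @ Z                              # attention-weighted mix of neighbors

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))    # 4 toy nodes (e.g. sentences), 8 features each
A = np.ones((4, 4))            # fully connected toy graph
W = rng.normal(size=(8, 8))
a = rng.normal(size=(16,))
out = graph_attention(H, A, W, a)
print(out.shape)  # prints (4, 8)
```

In a heterogeneous graph like HEGAT’s, different node types (words, sentences, cue phrases) and edge types would get their own transforms; the sketch above shows only the shared attention skeleton.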
Real-world impact
The meaning is profound:
- Journalists can immediately see which parts of a document support (or contradict) a news story.
- Legal teams can quickly pinpoint the contract clauses that establish a critical fact.
- Researchers can trace scientific claims back to the exact lines of evidence.
- Content moderators can make smarter decisions about what is real and what is not.
Outperforming the competition
When tested on a leading English fact-checking dataset, HEGAT not only held its own but outperformed previous systems. Its factual accuracy rose to 66.9% from 64.4%, and its exact-match accuracy jumped nearly five points to 42.9%. These gains are especially striking in tricky cases involving speculation or explicit negation, where older models often falter. Even when applied to Chinese documents, the system maintained its edge, hinting at future cross-language fact-checking.
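Exact match is the stricter of the two metrics: the model’s predicted evidence must match the annotated evidence perfectly, with no partial credit. The function and toy data below are illustrative, not drawn from the paper’s evaluation code.

```python
def exact_match(pred_sets, gold_sets):
    """Fraction of examples where predicted evidence exactly matches the gold set."""
    hits = sum(set(p) == set(g) for p, g in zip(pred_sets, gold_sets))
    return hits / len(gold_sets)

# Three toy examples: the second prediction misses one gold sentence,
# so it earns no credit at all under exact match.
preds = [[0, 2], [1], [3, 4]]
golds = [[0, 2], [1, 2], [3, 4]]
print(exact_match(preds, golds))  # prints 0.6666666666666666
```

This strictness is why a five-point jump in exact match is a meaningful improvement: every counted example must get the full evidence set right.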
Toward a more trustworthy AI
Perhaps most importantly, HEGAT’s transparent approach addresses the growing demand for explainable AI. In high-stakes fields, where a single mistake can have serious consequences, knowing why a machine made a decision is as important as the decision itself.
The Soochow team plans to release their code and detailed annotations to the public, inviting others to build on their work. This openness could accelerate the development of AI tools that are not only powerful but also trustworthy and responsible.
Want to know more?
As AI becomes ever more woven into our daily lives, innovations such as HEGAT offer a glimpse of a future in which machines not only make decisions, but also show their work.