
Hirundo raises $8 million to tackle AI hallucinations

Hirundo, the first startup dedicated to machine unlearning, has raised $8 million in seed funding to address some of the most pressing challenges in artificial intelligence: hallucinations, biases, and embedded data vulnerabilities. The round was led by Maverick Ventures Israel, with participation from SuperSeed, Alpha Intelligence Capital, Tachles VC, AI.Fund, and Plug and Play Tech Center.

Letting AI forget: the promise of machine unlearning

Unlike traditional AI tools that focus on refining or filtering AI output, Hirundo’s core innovation is machine unlearning, a technique that allows AI models to “forget” specific knowledge or behaviors after training. This approach lets businesses surgically remove hallucinations, biases, personal or proprietary data, and adversarial vulnerabilities from deployed AI models without retraining them from scratch. Retraining a large model can take weeks and cost millions of dollars; Hirundo offers a far more efficient alternative.
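To make the idea concrete, here is a minimal sketch of machine unlearning on a toy logistic-regression "model". Everything in it (the model, the synthetic data, and the plain gradient-ascent update) is an illustrative assumption, not Hirundo's proprietary method, which operates on large neural networks; real systems also balance the forgetting step against the data they want to keep, so overall accuracy is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y):
    """Mean cross-entropy of the model on (X, y)."""
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def grad(w, X, y):
    """Gradient of the cross-entropy loss w.r.t. the weights."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# Synthetic training set: two Gaussian blobs, classes 0 and 1.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Ordinary training: gradient descent on the full dataset.
w = np.zeros(2)
for _ in range(500):
    w -= 0.1 * grad(w, X, y)

# Suppose the first 20 examples encode knowledge we must remove.
X_forget, y_forget = X[:20], y[:20]
loss_before = loss(w, X_forget, y_forget)

# Unlearning step: gradient *ascent* on the forget set pushes the
# already-trained weights away from those examples, with no retraining
# from scratch.
for _ in range(25):
    w += 0.1 * grad(w, X_forget, y_forget)

loss_after = loss(w, X_forget, y_forget)
```

After the ascent steps the model fits the forgotten examples strictly worse (`loss_after > loss_before`), which is the operational meaning of "forgetting" here.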

Hirundo compares the process to AI neurosurgery: the company pinpoints exactly where the undesired output lives in the model’s parameters and removes it precisely, while preserving performance. This technology lets organizations remediate models already in production and deploy AI with greater confidence.

Why AI hallucinations are so dangerous

AI hallucinations are a model’s tendency to produce false or misleading information that sounds plausible or even factual. They are particularly problematic in corporate environments, where decisions based on misinformation can lead to legal exposure, operational errors and reputational damage. Research shows that 58% to 82% of the “facts” AI generates for legal queries contain some type of hallucination.

Despite efforts to minimize hallucinations with guardrails or fine-tuning, these methods often mask problems rather than eliminate them. Guardrails work like filters, and fine-tuning usually does not address the root cause, especially when hallucinations are rooted deep in the model’s weights. Hirundo goes further by actually deleting the offending knowledge or behavior from the model itself.
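The distinction can be sketched in a few lines. In this deliberately simplified illustration, the "model" is a toy key-value store standing in for learned parameters (real models are neural networks, and the names below are invented): a guardrail only filters what comes out, while unlearning removes the knowledge itself.

```python
# Toy "model": a lookup table standing in for learned parameters.
model = {
    "capital of France": "Paris",
    "internal project codename": "BlueFalcon",  # sensitive fact we must remove
}

BLOCKLIST = {"BlueFalcon"}

def answer_with_guardrail(model, query):
    """Guardrail approach: the knowledge stays in the model; only the
    output is filtered, so the fact remains embedded and recoverable."""
    out = model.get(query, "unknown")
    return "[redacted]" if out in BLOCKLIST else out

def unlearn(model, query):
    """Unlearning approach: the knowledge itself is deleted from the model."""
    model.pop(query, None)

# Guardrail masks the answer, but the sensitive fact is still inside.
masked = answer_with_guardrail(model, "internal project codename")
still_embedded = "internal project codename" in model  # True

# Unlearning actually removes it, while unrelated knowledge survives.
unlearn(model, "internal project codename")
gone = "internal project codename" not in model        # True
```

The guardrail path corresponds to the filter-style mitigations described above; the `unlearn` path corresponds to deleting the behavior at its source.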

Scalable platform for any AI stack

Hirundo’s platform is designed for flexibility and enterprise-grade deployment. It integrates with both generative and non-generative systems across a wide range of data types (natural language, vision, radar, LiDAR, speech and time series). The platform automatically detects mislabeled items, outliers and ambiguities in the training data. It then lets users debug a specific faulty output and trace it back to the problematic training data or learned behavior, which can be removed immediately.
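One step described above, flagging likely mislabeled training examples, can be sketched with a generic heuristic. The k-nearest-neighbour disagreement score below is a standard, publicly known technique chosen purely for illustration; Hirundo's actual detectors are not public, and the synthetic data here is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated classes; then deliberately flip 5 labels to
# simulate annotation errors.
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
flipped = rng.choice(100, size=5, replace=False)
y[flipped] = 1 - y[flipped]

def mislabel_scores(X, y, k=7):
    """Score each example by how strongly its neighbours' labels disagree
    with its own label (0 = full agreement, 1 = fully contradicted)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # an example is not its own neighbour
    nearest = np.argsort(d, axis=1)[:, :k]
    vote = y[nearest].mean(axis=1)         # fraction of neighbours labelled 1
    return np.abs(vote - y)

scores = mislabel_scores(X, y)
clean = np.setdiff1d(np.arange(100), flipped)

# The deliberately flipped examples score far higher than clean ones,
# so ranking by score surfaces them for review or removal.
print(scores[flipped].mean(), scores[clean].mean())
```

In a real pipeline the highest-scoring examples would be surfaced for human review or traced back from a specific failure, as the platform description above suggests.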

All of this is achieved without workflow changes. Hirundo’s SOC 2-certified platform can be deployed via SaaS, in a private cloud (VPC), or on-premises, making it suitable for sensitive environments such as finance, healthcare and defense.

Proven results across models

The company has already shown strong improvements on popular large language models (LLMs). In tests on Llama and DeepSeek, Hirundo reduced hallucinations by 55%, bias by 70%, and successful prompt-injection attacks by 85%. These results were verified with independent benchmarks such as HaluEval, PurpleLlama, and the Bias Benchmark for Q&A (BBQ).

While the current solution works best with open-source models such as Llama, Mistral and Gemma, Hirundo is actively expanding support for closed models such as ChatGPT and Claude, aiming to make its technology applicable across the entire enterprise LLM landscape.

Founders with academic and industry depth

Hirundo was founded in 2023 by three experts at the intersection of academic and corporate AI. CEO Ben Luria is a Rhodes Scholar and former visiting fellow at Oxford who previously founded a fintech startup and co-founded ScholarsIL, a nonprofit that supports higher education. CTO Michael Leybovich is a graduate researcher at the Technion and an award-winning R&D officer (Ofek 324). Chief scientist Professor Oded Shmueli is a former dean of computer science at the Technion and has held research positions at IBM, HP, AT&T and others.

Their collective experience spans foundational AI research, real-world deployment and secure data management, giving them unique qualifications to address the AI industry’s current reliability crisis.

Investors support a trustworthy AI future

The investors in this round align with Hirundo’s vision of building trusted, enterprise-ready AI. Yaron Carni, founder of Maverick Ventures Israel, pointed to an urgent need for technology that can remove hallucinations and biases before they cause real-world harm. “If hallucinations or biased intelligence are not removed from AI, the results become distorted and breed distrust,” he said. “Hirundo provides a kind of triage for AI, removing untruthful outputs and data built on discriminatory sources, and revolutionizing what AI can deliver.”

SuperSeed managing partner Mads Jensen echoed this view: “We invest in exceptional AI companies transforming industry verticals, but that transformation is only as strong as the models themselves. Hirundo’s machine-unlearning approach addresses a critical gap in the AI development lifecycle.”

Responding to the growing challenges of AI deployment

As AI systems are increasingly integrated into critical infrastructures, concerns about hallucinations, biases, and embedded sensitive data are becoming increasingly difficult to ignore. These issues pose significant risks in high-risk environments ranging from finance to healthcare and defense.

Machine unlearning is becoming a key tool in the AI industry’s response to concerns about model reliability and security. As hallucinations, embedded biases and exposure of sensitive data increasingly undermine trust in deployed AI systems, unlearning offers a way to mitigate these risks directly, after a model has been trained and put into use.

Rather than relying on filtering or surface-level fixes, machine unlearning targets the removal of problematic behavior and data from models already in production. The approach is gaining traction among businesses and government agencies seeking scalable solutions suited to high-stakes applications.
