Moments Lab Raises $24 Million to Redefine Video Discovery With Agentic AI

Moments Lab, the AI company redefining how organizations work with video, has raised $24 million in new funding led by OXX, with participation from Orange Ventures, Kadmos, Supernova Invest and Elaia Partners. The investment will fuel the company’s U.S. expansion and support continued development of its agentic AI platform – a system designed to turn vast video archives into instantly searchable, monetizable assets.
At the heart of Moments Lab is MXT-2, a multimodal video AI that watches, hears and interprets video with context-aware precision. It doesn’t just tag content, it narrates it, identifying people, places and logos, and even cinematographic elements such as shot types and pacing. This natural-language metadata turns hours of footage into structured, searchable intelligence that can be used across creative, editorial, marketing, and monetization workflows.
But the real leap is the introduction of agentic AI – autonomous systems that can plan, reason and adapt to user intent. Rather than simply executing instructions, the platform understands prompts like “generate a highlight reel for social” and takes action: pulling scenes, suggesting titles, choosing formats, and aligning the output with a brand’s voice or platform requirements.
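To make that idea concrete, here is a minimal, purely illustrative sketch of how such an agentic loop might decompose a prompt into subtasks and execute them in order. Every function name and value below is a hypothetical stand-in; Moments Lab has not published its actual architecture.

```python
# Illustrative agentic loop: a high-level prompt is planned into subtasks,
# each of which updates a shared context. All names are hypothetical.

def plan(prompt: str) -> list[str]:
    """Map a high-level request to an ordered list of subtasks."""
    if "highlight reel" in prompt.lower():
        return ["pull_scenes", "suggest_titles", "choose_format", "apply_brand_voice"]
    return ["pull_scenes"]

def execute(step: str, context: dict) -> dict:
    """Run one subtask and fold its result into the shared context."""
    handlers = {
        "pull_scenes": lambda c: {**c, "scenes": ["goal_042.mp4", "crowd_117.mp4"]},
        "suggest_titles": lambda c: {**c, "title": "Match Day Highlights"},
        "choose_format": lambda c: {**c, "format": "9:16 vertical, 60s"},
        "apply_brand_voice": lambda c: {**c, "style": "brand_guidelines_v2"},
    }
    return handlers[step](context)

context = {"prompt": "generate a highlight reel for social"}
for step in plan(context["prompt"]):
    context = execute(step, context)
print(context)  # the assembled brief: scenes, title, format, style
```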
“With MXT, we’ve been indexing video faster than anyone else,” said Philippe Petitpont, CEO and co-founder of Moments Lab. “But with agentic AI, we’re building the next layer – AI that acts as a teammate, doing everything from making rough cuts to uncovering storylines hidden deep in the archive.”
From search to storytelling: a platform built for speed and scale
Moments Lab is more than an indexing engine. It is a full-stack platform that empowers media professionals to act at the speed of the story, and it starts with search – arguably the most painful part of working with video today.
Most production teams still rely on file names, folders, and tribal knowledge to locate content. Moments Lab replaces that with plain-language search across the entire video library. Users simply type what they need – “the CEO talks about sustainability” or “crowd cheering at sunset” – and retrieve the exact clip in seconds.
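As a rough sketch of what that interaction could look like programmatically, the snippet below sends a natural-language query to a video search endpoint. The URL, parameters, and response fields are all assumptions for illustration – Moments Lab’s actual API may differ.

```python
# Hypothetical natural-language video search call. The endpoint and the
# response shape are illustrative assumptions, not a documented API.
import requests

resp = requests.post(
    "https://api.example.com/v1/video/search",   # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"query": "the CEO talks about sustainability", "limit": 5},
    timeout=30,
)
resp.raise_for_status()

for hit in resp.json().get("results", []):
    # Each result is assumed to carry a clip ID plus in/out timecodes.
    print(hit["clip_id"], hit["start"], hit["end"])
```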
Key features include:
- AI video intelligence: MXT-2 doesn’t just tag content – it describes it in time-coded natural language, capturing what is seen, heard, and implied.
- Search anyone can use: The platform is built for accessibility, letting non-technical users search thousands of hours of video in everyday language.
- Instant cut and export: Once a moment is found, it can be clipped, trimmed, and exported or shared in seconds – no timecode handoffs or third-party tools required (see the sketch after this list).
- Rich metadata discovery: Filter by person, event, date, location, rights status, or any custom facet a workflow requires.
- Quote and soundbite detection: Automatically transcribes audio and surfaces the most impactful segments – ideal for interview footage and press conferences.
- Content classification: Teams can train the system to sort videos by theme, tone, or use case – from trailers to corporate reels to social cuts.
- Translation and multilingual support: Transcribes and translates speech, making content usable globally across multilingual teams.
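The instant cut-and-export step mentioned above ultimately comes down to trimming a source file between two timecodes. Here is a minimal sketch of that underlying operation using the open-source ffmpeg CLI; Moments Lab performs this in-platform, and the file names and timecodes below are placeholders.

```python
# Trim a clip between two timecodes with the open-source ffmpeg CLI.
# This illustrates the underlying cut-and-export operation only;
# file names and timecodes are placeholders.
import subprocess

def export_clip(src: str, start: str, end: str, dest: str) -> None:
    """Copy the segment between start and end (HH:MM:SS) without re-encoding."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,        # source file
            "-ss", start,     # in-point
            "-to", end,       # out-point
            "-c", "copy",     # stream copy: fast, no quality loss
            dest,
        ],
        check=True,
    )

export_clip("press_conference.mp4", "00:12:03", "00:12:45", "soundbite.mp4")
```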
This end-to-end functionality makes Moments Lab an essential partner for TV networks, sports rights holders, advertising agencies and global brands. Recent clients include Thomson Reuters, Amazon Advertising, Sinclair, Hearst and Banijay – all grappling with increasingly complex content libraries and growing demands for speed, personalization and monetization.
Designed for integration, trained for precision
MXT-2 is trained on 1.5 billion+ data points, reducing hallucinations and delivering high-confidence outputs that teams can rely on. And while many AI stacks lock metadata into proprietary, unreadable formats, Moments Lab keeps everything in open text, ensuring full compatibility with downstream tools such as Adobe Premiere, Final Cut Pro, Brightcove, YouTube, and enterprise MAM/CMS systems (integrated via API or no-code connectors).
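Because the metadata stays in open text, reshaping it for a downstream tool takes only a few lines of code. Here is a minimal sketch, assuming time-coded descriptions arrive as JSON (a hypothetical shape), that converts them into a SubRip (.srt) file an editor such as Premiere can import:

```python
# Minimal sketch: convert time-coded metadata (assumed JSON shape) into a
# SubRip (.srt) file that NLEs such as Adobe Premiere can import.
import json

def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

metadata = json.loads("""[
  {"start": 12.0, "end": 17.5, "text": "CEO discusses sustainability goals"},
  {"start": 42.2, "end": 48.0, "text": "Crowd cheers at sunset"}
]""")

with open("descriptions.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(metadata, start=1):
        f.write(f"{i}\n{to_timestamp(seg['start'])} --> {to_timestamp(seg['end'])}\n{seg['text']}\n\n")
```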
“The real power of our system is not just speed, but adaptability,” said Frédéric Petitpont, co-founder and CTO. “Whether you’re a broadcaster tailoring highlight reels or a brand licensing footage to partners, our AI adapts to the way your team already works – just 100x faster.”
The platform already powers everything from deep archive search to real-time event editing, editorial research and content licensing. Users can share secure links with collaborators, sell footage to external buyers, and even train the system to match niche editorial styles or compliance guidelines.
From startup to standard-setter
Founded in 2016 by twin brothers Frédéric and Philippe Petitpont, Moments Lab began with a simple question: what if you could search a video library as easily as you search the web? Today it answers that question and more, redefining how creative and editorial teams work with media. Since 2023 it has been the most awarded AI indexing solution in the video industry, and it shows no signs of slowing down.
“When we first saw MXT in action, it felt like magic,” said Gökçe Ceylan, Principal at OXX. “This is exactly the kind of product and team we look for – technically brilliant, customer-obsessed and addressing a real, growing need.”
With this new funding, Moments Lab is ready to lead a category that didn’t exist five years ago – agentic AI for video – and to define the future of content discovery.