Researchers from Renmin University and Huawei propose MemEngine: a unified modular AI library for customizing the memory of LLM-based agents

LLM-based agents are increasingly used in a variety of applications, as they can handle complex tasks and play multiple roles. A key component of these agents is memory, which stores and recalls information, allowing the agent to reflect on past knowledge and make informed decisions. Memory plays a crucial role in tasks involving long-term interaction or role-playing, capturing past experiences and helping maintain character consistency. It supports the agent's ability to remember past interactions with the environment and use that information to guide future behavior, making it an essential module in such systems.
Despite growing interest in improving memory mechanisms for LLM-based agents, implementation strategies vary widely and a standardized framework is lacking. This fragmentation creates challenges for developers and researchers, who find it difficult to reproduce or compare models due to inconsistent designs. Furthermore, common features such as retrieval and summarization are often re-implemented from scratch across models, resulting in inefficiency. Many academic models are also deeply embedded in specific agent architectures, making them difficult to reuse or adapt to other systems. This highlights the need for a unified, modular framework for memory in LLM-based agents.
Researchers from Renmin University and Huawei have developed MemEngine, a unified modular library designed to support the development and deployment of advanced memory models for LLM-based agents. MemEngine organizes memory systems into three levels (functions, operations, and models), promoting effective and reusable design. It supports many existing memory models, allowing users to easily switch, configure, and extend them. The framework also includes tools for adjusting hyperparameters, saving memory state, and integrating with popular agent frameworks such as AutoGPT. Through comprehensive documentation and open-source access, MemEngine aims to simplify memory model research and facilitate widespread adoption.
MemEngine is a unified modular library designed to enhance the memory capabilities of LLM-based agents. Its architecture consists of three layers: a base layer with fundamental functions, an intermediate layer that manages core memory operations (such as storing, recalling, managing, and optimizing information), and a collection of advanced memory models inspired by recent research. These include models such as FUMemory (full-context memory), LTMemory (semantic retrieval), GAMemory (self-reflective memory), and MTMemory (tree-structured memory). Each model is implemented through a standardized interface, making them easy to switch or combine. The library also provides utilities such as encoders, summarizers, retrievers, and judges for building and customizing memory operations. Additionally, MemEngine includes tools for visualization, remote deployment, and automatic model selection, offering both local and server-based usage options.
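The standardized-interface idea described above can be illustrated with a short sketch. This is not MemEngine's actual API; the class and method names below (`BaseMemory`, `store`, `recall`, the two toy models) are hypothetical, showing only how a shared interface lets an agent swap memory models without changing its own code.

```python
from abc import ABC, abstractmethod

class BaseMemory(ABC):
    """Hypothetical standardized interface: every memory model
    exposes the same core operations, so models are interchangeable."""

    @abstractmethod
    def store(self, observation: str) -> None: ...

    @abstractmethod
    def recall(self, query: str) -> str: ...

class FullMemory(BaseMemory):
    """Keeps every observation and returns all of them on recall
    (a toy stand-in for a full-context memory model)."""

    def __init__(self):
        self.entries = []

    def store(self, observation: str) -> None:
        self.entries.append(observation)

    def recall(self, query: str) -> str:
        return "\n".join(self.entries)

class KeywordMemory(BaseMemory):
    """Returns only entries sharing words with the query
    (a toy stand-in for semantic retrieval such as LTMemory)."""

    def __init__(self):
        self.entries = []

    def store(self, observation: str) -> None:
        self.entries.append(observation)

    def recall(self, query: str) -> str:
        words = set(query.lower().split())
        hits = [e for e in self.entries if words & set(e.lower().split())]
        return "\n".join(hits)

def agent_turn(memory: BaseMemory, observation: str, query: str) -> str:
    """Agent-side code depends only on the interface, not the model."""
    memory.store(observation)
    return memory.recall(query)
```

Because `agent_turn` only depends on `BaseMemory`, replacing `FullMemory` with `KeywordMemory` (or any other model) requires no changes to the agent logic, which is the reuse benefit the layered design targets.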
Unlike many existing libraries that only support basic memory storage and retrieval, MemEngine distinguishes itself by supporting advanced features such as reflection, optimization, and customizable configuration. It has a powerful configuration module that allows developers to fine-tune hyperparameters and prompts using static files or dynamic inputs. Developers can configure parameters manually, start from default settings, or rely on automatic selection tailored to their tasks. The library also supports integration with tools such as vLLM and AutoGPT. MemEngine can be customized at the function, operation, and model levels by developers building new memory models, and it provides extensive documentation and examples. Overall, MemEngine offers a more comprehensive and research-aligned memory framework than other agent and memory libraries.
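The layered-configuration pattern (defaults, then a static file, then dynamic runtime input, with later sources winning) can be sketched as follows. The helper name `load_config` and the parameter keys are hypothetical illustrations, not MemEngine's actual configuration API.

```python
import json

# Assumed default hyperparameters (illustrative, not MemEngine's real defaults).
DEFAULTS = {
    "recall": {"top_k": 5, "threshold": 0.6},
    "summarizer": {"max_tokens": 256},
}

def _merge(base: dict, extra: dict) -> dict:
    """Recursively overlay `extra` onto `base`; later values win."""
    out = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = _merge(out[key], value)
        else:
            out[key] = value
    return out

def load_config(static_file=None, overrides=None) -> dict:
    """Build a config in three layers: defaults <- static JSON file <- runtime overrides."""
    cfg = dict(DEFAULTS)
    if static_file is not None:
        with open(static_file) as f:
            cfg = _merge(cfg, json.load(f))
    if overrides is not None:
        cfg = _merge(cfg, overrides)
    return cfg
```

For example, `load_config(overrides={"recall": {"top_k": 10}})` changes only the recall depth while all other defaults remain intact, which is the kind of selective fine-tuning the configuration module is described as enabling.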
In short, MemEngine is a unified modular library that supports the development of advanced memory models for LLM-based agents. Although large language model agents are increasingly used across industry, and despite many recent advances, there has been no standardized framework for implementing their memory models. MemEngine fills this gap by providing a flexible, extensible platform that integrates a variety of state-of-the-art memory methods and supports both straightforward development and plug-and-play usage. Going forward, the authors aim to expand the framework to include multimodal memory, such as audio and visual data, for a wider range of applications.
Check out the paper. All credit for this research goes to the researchers of this project.

Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. He is very interested in solving practical problems, and he brings a new perspective to the intersection of AI and real-life solutions.