GibsonAI Releases Memori: An Open Source SQL-Native Memory Engine for AI Agents
Memory is one of the first things that comes to mind when we think about human intelligence. It is what enables us to learn from experience, adapt to new situations, and make smarter decisions over time. Similarly, AI agents become smarter with memory. For example, an agent can remember your past purchases, budget, and preferences, and learn to suggest gifts for your friends based on past conversations.
Agents usually break tasks into steps (plan → search → call API → parse → write), but without memory they forget what happened in the earlier steps. The agent repeats a call, fetches the same data again, or misses simple rules such as “always address the user by name.” Because the same context is repeated over and over, agents spend more tokens, respond more slowly, and give inconsistent answers. The industry has collectively spent billions on vector databases and embedding infrastructure to solve data persistence for AI agents. These solutions create black-box systems that developers cannot inspect or query, with no way to understand why certain memories were retrieved.
The GibsonAI team built Memori to fix this issue. Memori is an open source memory engine that uses a standard SQL database (PostgreSQL/MySQL) to provide persistent, intelligent memory for any LLM. In this article, we will explore how Memori addresses these memory challenges and what it provides.
The stateless nature of modern AI: hidden costs
Research shows that users spend 23-31% of their time providing context they have already shared in previous conversations. For development teams using AI assistants, this translates to:
- Individual developer: ~2 hours/week spent repeating context
- Team of 10: ~20 hours/week of lost productivity
- Enterprise (1,000 developers): ~2,000 hours/week, or about $4M/year, in redundant communication
Beyond productivity, this repetition breaks the illusion of intelligence. An AI that doesn’t remember your name after hundreds of conversations doesn’t feel smart.
Current limitations of stateless LLMs
- No learning from interactions: every error is repeated, every preference must be restated
- Broken workflows: multi-session projects require constant context reconstruction
- No personalization: AI cannot adapt to individual users or teams
- Lost insights: valuable patterns in conversations are never captured
- Compliance challenges: no audit trail of AI decisions
Memory needs to be persistent and queryable
What AI really needs is persistent, queryable memory, just as every application depends on a database. However, you can’t simply reuse an existing application database as AI memory, because it isn’t designed for context selection, relevance ranking, or injecting knowledge into an agent’s workflow. That’s why a dedicated memory layer is crucial for making AI agents truly smart.
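To make this concrete, here is a minimal sketch of what such a memory layer could store. The table and column names below are hypothetical illustrations for this article, not Memori’s actual schema:

```python
import sqlite3

# Hypothetical sketch only: NOT Memori's real schema, just an illustration
# of what a SQL-backed memory layer might record per user.
conn = sqlite3.connect("memory_sketch.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id         INTEGER PRIMARY KEY,
        user_id    TEXT NOT NULL,
        category   TEXT,            -- e.g. 'preference', 'fact', 'rule'
        content    TEXT NOT NULL,   -- human-readable memory text
        importance REAL,            -- score used for context selection
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.commit()
```

Because every memory is a readable row, you can inspect, edit, or delete it with any SQLite tool.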
Why SQL is important for AI memory
SQL databases have been around for more than 50 years. They are the backbone of nearly every app we use today, from banking applications to social networks. Why? Because SQL is simple, reliable and universal.
- Every developer knows SQL. You don’t need to learn a new query language.
- Battle-tested reliability. SQL has run the world’s most critical systems for decades.
- Powerful querying. You can easily filter, join, and aggregate data (see the example below).
- Strong guarantees. ACID transactions keep your data consistent and safe.
- Huge ecosystem. Migration, backup, dashboard, and monitoring tools are everywhere.
By building on SQL, you stand on decades of proven technology instead of reinventing the wheel.
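For example, answering “what does this user care about most?” is an ordinary SQL query. Continuing the hypothetical `memories` table sketched earlier:

```python
# Filter, aggregate, and rank memories with plain SQL: no proprietary
# query language, no embedding index, and results you can read directly.
rows = conn.execute("""
    SELECT category, COUNT(*) AS total, MAX(importance) AS top_score
    FROM memories
    WHERE user_id = ?
    GROUP BY category
    ORDER BY top_score DESC
""", ("user-123",)).fetchall()

for category, total, top_score in rows:
    print(f"{category}: {total} memories (top score: {top_score})")
```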
Disadvantages of vector databases
Today, most competing AI memory systems are built on vector databases. On paper, they sound advanced: they let you store embeddings and search by similarity. In reality, they carry hidden costs and complexity:
- Multiple moving parts. A typical setup requires a vector DB, a cache, and a SQL DB just to function.
- Vendor lock-in. Your data usually lives inside a proprietary system, making it hard to move or audit.
- Black-box retrieval. You can’t see why a certain memory was pulled.
- Expensive. Infrastructure and usage costs climb quickly, especially at scale.
- Hard to debug. Embeddings are not human-readable, so you can’t simply run a SQL query and inspect the results.
Compare this with Memori’s SQL-first design:
| Aspect | Vector database / RAG solutions | Memori’s approach |
| --- | --- | --- |
| Services required | 3–5 (vector DB + cache + SQL) | 1 (SQL only) |
| Databases | Vector + cache + SQL | SQL only |
| Query language | Proprietary API | Standard SQL |
| Debugging | Black-box embeddings | Readable SQL queries |
| Backup | Complex orchestration | `cp memori.db backup.db` or `pg_basebackup` |
| Data processing | Embeddings: ~$0.0001/1K tokens (OpenAI) → cheap upfront | Entity extraction: gpt-4o at ~$0.005/1K tokens → higher upfront |
| Storage cost | $0.10–0.50/GB/month (vector DBs) | ~$0.01–0.05/GB/month (SQL) |
| Query cost | ~$0.0004/1K vector searches | Near zero (standard SQL queries) |
| Infrastructure | Multiple moving parts, higher maintenance | Single database, easy to manage |
Why does it work?
If you think SQL can’t handle memory at scale, think again. SQLite, one of the simplest SQL databases, is the most widely deployed database in the world:
- Over 4 billion deployments
- Runs on every iPhone, Android device, and web browser
- Executes trillions of queries every day
If SQLite can handle that workload with ease, why build AI memory on expensive, distributed vector clusters?
Memori Solution Overview
Memori uses structured entity extraction, relationship mapping, and SQL-based retrieval to create transparent, portable, and queryable AI memory. Multiple agents work together to intelligently promote essential long-term memories to short-term storage for faster context injection.
With one line of code, memori.enable(), any LLM gains the ability to remember conversations, learn from interactions, and maintain context across sessions. The entire memory system lives in a standard SQLite database (or PostgreSQL/MySQL for enterprise deployments), making it fully portable, auditable, and user-owned.
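Here is a minimal quickstart sketch based on the project’s documented usage. Parameter names such as `database_connect` and `conscious_ingest` may change between releases, so treat this as illustrative and check the GitHub README for the current API:

```python
from memori import Memori
from openai import OpenAI

# Sketch based on Memori's documented quickstart; verify parameter names
# against the current release before relying on them.
memori = Memori(
    database_connect="sqlite:///memori.db",  # memory lives in a plain SQLite file
    conscious_ingest=True,                   # promote key memories at startup
)
memori.enable()  # from here on, LLM calls are recorded and context-injected

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I prefer concise answers."}],
)
# In a later session, Memori can surface this preference automatically.
```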
The key differentiators
- Radical simplicity: one line enables memory for any LLM framework (OpenAI, Anthropic, LiteLLM, LangChain)
- True data ownership: memory is stored in a standard SQL database under the user’s full control
- Complete transparency: every memory decision can be inspected and explained with a SQL query (see the example after this list)
- Zero vendor lock-in: export the entire memory as a SQLite file and move it anywhere
- Cost efficiency: 80-90% cheaper than vector database solutions
- Compliance readiness: SQL-based storage enables audit trails, data residency, and regulatory compliance
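Because the memory store is an ordinary SQLite file, auditing it needs nothing beyond the standard library. The snippet below makes no assumptions about Memori’s internal table names; it simply lists whatever tables exist and their row counts:

```python
import sqlite3

# Open the memory file Memori created and enumerate its tables.
# Nothing is hidden behind a proprietary API: any SQL tool works here.
conn = sqlite3.connect("memori.db")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
for (name,) in tables:
    count = conn.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()[0]
    print(f"{name}: {count} rows")
```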
Memori use cases
- Smart shopping experiences with AI agents that remember customer preferences and shopping behavior
- Personal AI assistants that remember user preferences and context
- Customer support bots that never ask the same question twice
- Educational tutors that adapt to each student’s progress
- Team knowledge management systems with shared memory
- Compliance-focused applications requiring a complete audit trail
Business impact indicators
Based on early implementations by our community users, we found that Memori helps with the following:
- Development time: 90% reduction in memory system implementation (hours instead of weeks)
- Infrastructure cost: 80-90% reduction compared to vector database solutions
- Query performance: 10-50 ms response times (2-4x faster than vector similarity search)
- Memory portability: 100% of memory data is portable (versus 0% with cloud vector databases)
- Compliance readiness: full SQL audit capability from day one
- Maintenance overhead: a single database instead of a distributed vector system
Technical innovations
Memori introduces three core innovations:
- Dual-mode memory system: combines “conscious” working memory with “automatic” intelligent search to mimic human cognitive patterns (see the sketch after this list)
- Universal integration layer: automatic memory injection for any LLM without framework-specific code
- Multi-agent architecture: multiple specialized AI agents work together to process memory intelligently
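A hedged sketch of how the two memory modes might be selected in code. The `conscious_ingest` and `auto_ingest` flags follow the project’s documentation, but verify them against the release you install:

```python
from memori import Memori

# Dual-mode setup (flag names follow the project's docs; verify per release):
# - conscious_ingest: once at startup, promote essential long-term memories
#   into short-term "working memory" for fast context injection.
# - auto_ingest: on every call, search the full database for context
#   relevant to the current query.
memori = Memori(
    database_connect="sqlite:///memori.db",
    conscious_ingest=True,  # "conscious" working-memory mode
    auto_ingest=True,       # "automatic" intelligent-search mode
)
memori.enable()
```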
Existing solutions in the market
There are already several ways to give AI agents some form of memory, each with its own advantages and tradeoffs:
- Mem0 → a feature-rich solution that combines Redis, vector databases, and orchestration layers to manage memory in distributed settings.
- LangChain Memory → provides convenient abstractions for developers within the LangChain framework.
- Vector databases (Pinecone, Weaviate, Chroma) → focus on embedding-based semantic similarity search for specialized use cases.
- Custom solutions → in-house designs tailored to specific business needs offer flexibility but require significant maintenance.
These solutions show the various directions the industry has taken to solve the memory problem. Memori enters the landscape with a different philosophy: memory should be SQL-native and open source, making it simple, transparent, and production-ready.
Memori is built on a strong database infrastructure
Beyond memory itself, AI agents also need a database backbone that makes that memory reliable and scalable. Think of an AI agent that can safely run queries in an isolated database sandbox, optimize queries over time, and provision databases on demand, for example spinning up a new database to keep a user’s data separate.
GibsonAI’s powerful database infrastructure backs Memori, making its memory reliable and production-ready with:
- Instant provisioning
- Autoscaling on demand
- Database branching
- Database version control
- Query optimization
- Recovery points
Strategic Vision
Competitors chase complexity with distributed vector solutions and proprietary embeddings, but Memori embraces the reliability of the SQL databases that have powered applications for decades.
The goal is not to build the most sophisticated memory system, but the most practical one. By storing AI memory in the same databases that already run the world’s applications, Memori makes AI memory as portable, queryable, and manageable as any other application data.
Check out the GitHub page. Thanks to the GibsonAI team for their thought leadership, resources, and support for this article.
Asif Razzaq is CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform known for in-depth coverage of machine learning and deep learning news that is technically sound yet accessible to a wide audience. The platform receives over 2 million views per month, attesting to its popularity among readers.