How to Design a Persistent-Memory, Personalized Agentic AI System with Decay and Self-Evaluation
In this tutorial, we explore how to build an intelligent agent that can remember, learn, and adapt to us over time. We implement a persistent memory and personalization system using simple, rule-based logic to model how modern agentic AI frameworks store and recall contextual information. As we progress, we see how the agent's responses change with experience, how memory decay prevents overload, and how personalization improves performance. Our goal is to understand, step by step, how persistence can transform a static chatbot into a context-aware, evolving digital companion. The complete code is available here.
import math, time, random
from typing import List

class MemoryItem:
    def __init__(self, kind: str, content: str, score: float = 1.0):
        self.kind = kind
        self.content = content
        self.score = score
        self.t = time.time()

class MemoryStore:
    def __init__(self, decay_half_life=1800):
        self.items: List[MemoryItem] = []
        self.decay_half_life = decay_half_life

    def _decay_factor(self, item: MemoryItem):
        dt = time.time() - item.t
        return 0.5 ** (dt / self.decay_half_life)
We lay the foundation for the agent's long-term memory. We define the MemoryItem class to hold each piece of information and build a MemoryStore with an exponential decay mechanism, so that stored information ages over time much like human memory.
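The decay factor follows a standard exponential half-life curve: a memory's weight halves every `decay_half_life` seconds. A standalone sketch of the same formula (the elapsed times below are illustrative, chosen to show whole half-lives):

```python
# Sketch of the exponential half-life decay used by MemoryStore._decay_factor.
# The elapsed times passed in below are illustrative examples.
def decay_factor(elapsed_s: float, half_life_s: float = 1800.0) -> float:
    """Weight multiplier after elapsed_s seconds: 1.0 when fresh, halved per half-life."""
    return 0.5 ** (elapsed_s / half_life_s)

print(decay_factor(0))     # 1.0  (fresh memory)
print(decay_factor(1800))  # 0.5  (one half-life elapsed)
print(decay_factor(3600))  # 0.25 (two half-lives elapsed)
```

Because the factor never reaches zero, old memories fade smoothly rather than expiring at a hard cutoff; the cleanup threshold added later decides when a faded memory is finally dropped.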
    def add(self, kind: str, content: str, score: float = 1.0):
        self.items.append(MemoryItem(kind, content, score))

    def search(self, query: str, topk=3):
        scored = []
        for it in self.items:
            decay = self._decay_factor(it)
            sim = len(set(query.lower().split()) & set(it.content.lower().split()))
            final = (it.score * decay) + sim
            scored.append((final, it))
        scored.sort(key=lambda x: x[0], reverse=True)
        return [it for s, it in scored[:topk] if s > 0]

    def cleanup(self, min_score=0.1):
        new = []
        for it in self.items:
            if it.score * self._decay_factor(it) > min_score:
                new.append(it)
        self.items = new
We extend the memory system with methods to insert, search, and clean up old memories. A simple word-overlap similarity function combined with decay-based cleanup lets the agent surface relevant facts while automatically forgetting weak or outdated ones.
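The retrieval score in search combines the decayed base score with word-overlap similarity. A minimal sketch of that scoring rule in isolation (the query and memory strings are illustrative, not taken from the demo):

```python
def overlap(query: str, content: str) -> int:
    # Count of shared lowercase tokens, as in MemoryStore.search.
    return len(set(query.lower().split()) & set(content.lower().split()))

def final_score(base: float, decay: float, query: str, content: str) -> float:
    # final = (stored score * decay factor) + similarity, mirroring search().
    return base * decay + overlap(query, content)

# A decayed but relevant memory can still outrank a fresh, unrelated one:
relevant = final_score(1.0, 0.25, "recommend a blog topic", "Topic: blog ideas")  # 0.25 + 1 overlap
fresh    = final_score(1.0, 1.00, "recommend a blog topic", "user said: hello")   # 1.0  + 0 overlap
```

Because each shared token adds a full point while decay only scales the base score, topical relevance dominates recency in this scheme; tuning that balance is a natural extension.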
class Agent:
    def __init__(self, memory: MemoryStore, name="PersonalAgent"):
        self.memory = memory
        self.name = name

    def _llm_sim(self, prompt: str, context: List[str]):
        base = "OK. "
        # Match the stored raw input, e.g. "I prefer short answers."
        if any("prefer short" in c.lower() for c in context):
            base = ""
        reply = base + f"I considered {len(context)} past notes. "
        if "summarize" in prompt.lower():
            return reply + "Summary: " + " | ".join(context[:2])
        if "recommend" in prompt.lower():
            if any("cybersecurity" in c.lower() for c in context):
                return reply + "Recommended: write more cybersecurity articles."
            if any("rag" in c.lower() for c in context):
                return reply + "Recommended: build an agentic RAG demo next."
            return reply + "Recommended: continue with your last topic."
        return reply + "Here's my response to: " + prompt

    def perceive(self, user_input: str):
        ui = user_input.lower()
        if "i like" in ui or "i prefer" in ui:
            self.memory.add("preference", user_input, 1.5)
        if "topic:" in ui:
            self.memory.add("topic", user_input, 1.2)
        if "project" in ui:
            self.memory.add("project", user_input, 1.0)

    def act(self, user_input: str):
        mems = self.memory.search(user_input, topk=4)
        ctx = [m.content for m in mems]
        answer = self._llm_sim(user_input, ctx)
        self.memory.add("dialog", f"user said: {user_input}", 0.6)
        self.memory.cleanup()
        return answer, ctx
We design an intelligent agent that uses memory to inform its responses. A small simulated language model tailors replies based on stored preferences and topics, while the perceive method lets the agent dynamically capture new facts about the user.
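The perceive method is a simple keyword router: each trigger phrase maps the raw input to a memory kind with a fixed weight. A standalone sketch of that routing logic (triggers and weights copied from the tutorial's perceive):

```python
def classify(user_input: str):
    # Mirror Agent.perceive: trigger phrases map input to (kind, weight) pairs.
    ui = user_input.lower()
    kinds = []
    if "i like" in ui or "i prefer" in ui:
        kinds.append(("preference", 1.5))
    if "topic:" in ui:
        kinds.append(("topic", 1.2))
    if "project" in ui:
        kinds.append(("project", 1.0))
    return kinds

print(classify("I prefer short answers."))    # [('preference', 1.5)]
print(classify("Topic: my project roadmap"))  # [('topic', 1.2), ('project', 1.0)]
print(classify("hello"))                      # [] -> nothing stored
```

Note that one input can match several triggers and be stored under multiple kinds; preferences carry the highest weight so they survive decay the longest.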
def evaluate_personalisation(agent: Agent):
    agent.memory.add("preference", "User likes cybersecurity articles", 1.6)
    q = "Recommend what to write next"
    ans_personal, _ = agent.act(q)
    empty_mem = MemoryStore()
    cold_agent = Agent(empty_mem)
    ans_cold, _ = cold_agent.act(q)
    gain = len(ans_personal) - len(ans_cold)
    return ans_personal, ans_cold, gain
Now we give the agent the ability to act and evaluate itself. It recalls memories to form contextual answers, and a small evaluation loop compares the personalized response against a no-memory baseline to quantify how much memory helps.
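The evaluation uses answer length in characters as a crude proxy for specificity. A hedged sketch of that gain metric on its own (the sample answers below are illustrative strings, not actual agent output):

```python
def personalisation_gain(with_memory: str, cold_start: str) -> int:
    # Character-count difference, as in evaluate_personalisation:
    # a longer, more specific answer is treated as a gain.
    return len(with_memory) - len(cold_start)

gain = personalisation_gain(
    "Recommended: write more cybersecurity articles.",  # illustrative personalized reply
    "OK. Recommended: continue with your last topic.",  # illustrative cold-start reply
)
```

Length is a blunt instrument; swapping in a relevance score (e.g. keyword hits against known preferences) would be a natural refinement of the same loop.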
mem = MemoryStore(decay_half_life=60)
agent = Agent(mem)

print("=== Demo: teaching the agent about yourself ===")
inputs = [
    "I prefer short answers.",
    "I like writing about RAG and agentic AI.",
    "Topic: cybersecurity, phishing, APTs.",
    "My current project is to build an agentic RAG Q&A system."
]
for inp in inputs:
    agent.perceive(inp)

print("\n=== Now ask the agent something ===")
user_q = "Recommend what to write next in my blog"
ans, ctx = agent.act(user_q)
print("USER:", user_q)
print("AGENT:", ans)
print("USED MEMORY:", ctx)

print("\n=== Evaluate personalisation benefit ===")
p, c, g = evaluate_personalisation(agent)
print("With memory :", p)
print("Cold start  :", c)
print("Personalisation gain (chars):", g)

print("\n=== Current memory snapshot ===")
for it in agent.memory.items:
    print(f"- {it.kind} | {it.content[:60]}... | score~{round(it.score,2)}")
Finally, we run the full demo to see the agent in action. We feed it user input, observe how it recommends personalized actions, and examine its memory snapshot. The adaptive behavior that emerges demonstrates how persistent memory transforms a static script into a learning companion.
In summary, we show how adding memory and personalization makes our agents more human-like: capable of remembering preferences, adjusting plans, and naturally forgetting out-of-date details. We observe that even simple mechanisms such as decay and retrieval can significantly improve an agent's relevance and response quality. Finally, we see that persistent memory is a foundation for the next generation of agentic AI, able to continuously learn, customize experiences, and maintain context in a completely local, offline setting.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for the benefit of society. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform that stands out for its in-depth coverage of machine learning and deep learning news that is technically sound and easy to understand for a broad audience. The platform has more than 2 million monthly views, reflecting its popularity among readers.