A Coding Guide to Unlock Mem0 Memory for an Anthropic Claude Bot: Enabling Context-Rich Conversations

In this tutorial, we walk you through setting up a fully functional, memory-enabled bot in Google Colab that pairs Anthropic's Claude model with Mem0. Combining LangGraph's intuitive state-machine orchestration with Mem0's powerful memory store lets our assistant remember past conversations, retrieve relevant details on demand, and maintain natural continuity across sessions. Whether you are building a support bot, a virtual assistant, or an interactive demo, this guide gives you a solid foundation for a reliable, memory-driven AI experience.
!pip install -qU langgraph mem0ai langchain langchain-anthropic anthropic
First, we install and upgrade LangGraph, the Mem0 AI client, LangChain with its Anthropic connector, and the core Anthropic SDK, ensuring we have all the latest libraries required to build a memory-driven Claude chatbot in Google Colab. Running this upfront avoids dependency issues and keeps the setup process smooth.
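If you later run into import errors, it can help to confirm which versions actually landed in the Colab runtime. Here is a minimal, optional check (our own addition, not part of the original walkthrough) that uses only the standard library:
import importlib.metadata as md

# Print the installed version of each package we just installed
for pkg in ["langgraph", "mem0ai", "langchain", "langchain-anthropic", "anthropic"]:
    print(pkg, md.version(pkg))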
import os
from typing import Annotated, TypedDict, List
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_anthropic import ChatAnthropic
from mem0 import MemoryClient
Here we bring together the core building blocks for our Colab chatbot: the os module for reading API keys, Python's typed dictionaries and annotation utilities for defining conversational state, LangGraph's graph and message-reducer helpers for orchestrating chat flow, LangChain's message classes for constructing prompts, the ChatAnthropic wrapper for calling Claude, and Mem0's client for persistent memory storage.
os.environ["ANTHROPIC_API_KEY"] = "Use Your Own API Key"
MEM0_API_KEY = "Use Your Own API Key"
We place our Anthropic and Mem0 credentials into an environment variable and a local variable, respectively, so the ChatAnthropic client and the Mem0 memory store can authenticate correctly inside our notebook. Centralizing the API keys here keeps a clean separation between code and secrets while giving us seamless access to the Claude model and the persistent memory layer.
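Hard-coding keys into a shareable notebook is risky, so as a safer variant you can pull them from Colab's Secrets panel instead (add them under the key icon in the left sidebar first). This is an optional sketch, and the secret names below are our own choice:
from google.colab import userdata

# Read secrets stored in Colab's Secrets panel; the names are assumptions
os.environ["ANTHROPIC_API_KEY"] = userdata.get("ANTHROPIC_API_KEY")
MEM0_API_KEY = userdata.get("MEM0_API_KEY")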
llm = ChatAnthropic(
    model="claude-3-5-haiku-latest",
    temperature=0.0,
    max_tokens=1024,
    anthropic_api_key=os.environ["ANTHROPIC_API_KEY"]
)
mem0 = MemoryClient(api_key=MEM0_API_KEY)
We initialize our conversational AI core: first, we create a ChatAnthropic instance configured to talk to Claude 3.5 Haiku at zero temperature for deterministic replies, authenticating with our stored Anthropic key. We then instantiate the Mem0 MemoryClient with our Mem0 API key, giving our bot a persistent vector-backed memory store for saving and retrieving past interactions.
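Before wiring up the graph, a quick smoke test confirms that the Claude client authenticates and responds. This one-liner is our own sanity check, not part of the original flow:
# Sanity check: a single direct call to Claude, bypassing the graph entirely
print(llm.invoke([HumanMessage(content="Say hello in one short sentence.")]).content)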
class State(TypedDict):
    messages: Annotated[List[HumanMessage | AIMessage], add_messages]
    mem0_user_id: str

graph = StateGraph(State)

def chatbot(state: State):
    messages = state["messages"]
    user_id = state["mem0_user_id"]
    # Retrieve memories relevant to the latest user message
    memories = mem0.search(messages[-1].content, user_id=user_id)
    context = "\n".join(f"- {m['memory']}" for m in memories)
    system_message = SystemMessage(content=(
        "You are a helpful customer support assistant. "
        "Use the context below to personalize your answers:\n" + context
    ))
    full_msgs = [system_message] + messages
    ai_resp: AIMessage = llm.invoke(full_msgs)
    # Persist the new exchange so future turns can recall it
    mem0.add(
        f"User: {messages[-1].content}\nAssistant: {ai_resp.content}",
        user_id=user_id
    )
    return {"messages": [ai_resp]}
We define the conversation state schema and hook it into LangGraph's state machine: the State type tracks the message history and the Mem0 user ID, and graph = StateGraph(State) sets up the flow controller. Inside chatbot, the latest user message is used to query Mem0 for relevant memories, a context-enriched system prompt is assembled, Claude generates a reply, and the new exchange is saved back to Mem0 before the assistant's response is returned.
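To see the memory layer working in isolation before running the full graph, you can exercise the same two Mem0 calls the node uses. The sample exchange and the demo_user ID below are ours, purely for illustration:
# Store one illustrative exchange, then search it back (demo_user is a made-up ID)
mem0.add("User: My name is Sam and I use the Pro plan.\nAssistant: Noted!", user_id="demo_user")
for m in mem0.search("What plan is the user on?", user_id="demo_user"):
    print("-", m["memory"])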
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", "chatbot")
compiled_graph = graph.compile()
We plug the chatbot function into LangGraph's execution flow by registering it as a node named "chatbot", then connect the built-in START marker to that node so every conversation begins there, and finally add a self-loop edge so each new user message re-enters the same logic. Calling graph.compile() turns this node-and-edge setup into an optimized, runnable graph object that automatically manages every turn of our chat session.
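If you want to double-check the wiring, LangGraph can render the compiled topology as a Mermaid diagram. This inspection step is optional and not part of the original walkthrough:
# Print a Mermaid description of the graph (START -> chatbot, plus the self-loop)
print(compiled_graph.get_graph().draw_mermaid())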
def run_conversation(user_input: str, mem0_user_id: str):
    config = {"configurable": {"thread_id": mem0_user_id}}
    state = {"messages": [HumanMessage(content=user_input)], "mem0_user_id": mem0_user_id}
    for event in compiled_graph.stream(state, config):
        for node_output in event.values():
            if node_output.get("messages"):
                print("Assistant:", node_output["messages"][-1].content)
                return
if __name__ == "__main__":
    print("Welcome! (type 'exit' to quit)")
    mem0_user_id = "customer_123"
    while True:
        user_in = input("You: ")
        if user_in.lower() in ["exit", "quit", "bye"]:
            print("Assistant: Goodbye!")
            break
        run_conversation(user_in, mem0_user_id)
We tie everything together by defining run_conversation, which wraps the user input into LangGraph state, streams it through the compiled graph to invoke the chatbot node, and prints Claude's reply; the early return after the first assistant message is what keeps the self-loop edge from running indefinitely. The __main__ block then starts a simple REPL loop that prompts us for a message, routes it through the memory-enabled graph, and exits gracefully when we type "exit".
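Outside the REPL, for example in a separate Colab cell, a single turn can be driven directly; the message below is just an illustrative placeholder:
# One-off turn, reusing the same user ID so memories accumulate across calls
run_conversation("Hi, I ordered a laptop last week and it hasn't arrived.", "customer_123")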
All in all, we have assembled a conversational AI pipeline that combines Anthropic's cutting-edge Claude model with Mem0's persistent memory capabilities, all orchestrated through LangGraph in Google Colab. This architecture lets our bot recall user-specific details, adapt its responses over time, and deliver personalized support. From here, consider experimenting with richer memory-retrieval strategies, fine-tuning Claude's prompts, or integrating additional tools into your graph, as in the sketch below.
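As one example of a richer retrieval strategy, you could cap how many memories reach the prompt and drop weak matches. This sketch assumes, as the hosted Mem0 API generally provides, that search accepts a limit argument and that each result carries a relevance score; treat both as assumptions to verify against your Mem0 client version:
def build_context(query: str, user_id: str, k: int = 5, min_score: float = 0.3) -> str:
    # Fetch at most k candidate memories for the query (limit is assumed supported)
    memories = mem0.search(query, user_id=user_id, limit=k)
    # Keep only confident matches; results lacking a score pass through unchanged
    strong = [m for m in memories if m.get("score", 1.0) >= min_score]
    return "\n".join(f"- {m['memory']}" for m in strong)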