Learn about LangChain’s DeepAgents library and walk through a practical example to understand how DeepAgents actually works
While basic large language model (LLM) agents (agents that repeatedly call external tools in a loop) are easy to create, they often struggle with long-horizon, complex tasks because they lack the ability to plan ahead and manage work over time. Their execution can be considered “shallow”.
The DeepAgents library aims to overcome this limitation by implementing a common architecture inspired by advanced applications such as Deep Research and Claude Code.

This architecture provides additional depth to agents by combining four key capabilities:
- Planning tool: Lets agents strategically break complex tasks into manageable steps before taking action.
- Subagents: Enable a main agent to delegate specialized parts of a task to smaller, dedicated agents.
- File system access: Provides persistent memory for work in progress, notes, and final outputs, allowing the agent to pick up where it left off.
- Detailed prompts: Provide agents with clear instructions, context, and the constraints of long-term goals.
By providing these foundational components, DeepAgents makes it easier for developers to build powerful, general-purpose agents that can plan, manage state, and execute complex workflows efficiently.
In this article, we’ll walk through a practical example to see how DeepAgents actually works.
Core capabilities of DeepAgents
1. Planning and task breakdown: DeepAgents comes with a built-in write_todos tool that helps agents break down large tasks into smaller, manageable steps. Agents can track progress and adjust plans as they learn new information.
2. Context management: Agents can store information outside of short-term memory using file tools such as ls, read_file, write_file, and edit_file. This prevents context overflow and allows them to work smoothly on larger or more detailed tasks.
3. Subagent creation: A built-in task tool allows agents to spawn smaller, more focused subagents. These subagents handle specific parts of the problem without cluttering the main agent’s context.
4. Long-term memory: Powered by the LangGraph store, agents can remember information across sessions. This means they can recall past work, continue previous conversations, and build on earlier progress.
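As a quick preview of these built-ins (the full step-by-step setup follows below), here is a minimal sketch. It assumes the deepagents and langchain-openai packages are installed and an OPENAI_API_KEY is set; the prompt and query are illustrative only. Even with no custom tools attached, the agent still has the built-in planning and file tools described above.
from deepagents import create_deep_agent
from langchain.chat_models import init_chat_model

# Minimal sketch (assumes deepagents + langchain-openai installed, OPENAI_API_KEY set).
# Even with no custom tools, the agent ships with write_todos plus the file tools
# ls, read_file, write_file, and edit_file.
bare_agent = create_deep_agent(
    model=init_chat_model(model="openai:gpt-4o"),
    system_prompt="Plan your work with write_todos and keep notes in files.",
)

result = bare_agent.invoke(
    {"messages": [{"role": "user", "content": "Draft a 3-step research plan on EU AI regulation."}]}
)
print(result["messages"][-1].content)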


Install dependencies
!pip install deepagents tavily-python langchain-google-genai langchain-openai
Environment variables
In this tutorial, we will use an OpenAI API key to power our deep agent. For reference, we will also show how to use a Gemini model.
You are free to choose any model provider you like (OpenAI, Gemini, Anthropic, or others), as DeepAgents works seamlessly with different backends.
import os
from getpass import getpass
os.environ['TAVILY_API_KEY'] = getpass('Enter Tavily API Key: ')
os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')
os.environ['GOOGLE_API_KEY'] = getpass('Enter Google API Key: ')
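If you rerun the notebook, a small convenience guard like the following (not part of the original setup, just a sketch) avoids re-prompting for keys that are already present in the environment.
# Optional: only prompt for keys that are not already set.
for key in ["TAVILY_API_KEY", "OPENAI_API_KEY", "GOOGLE_API_KEY"]:
    if not os.environ.get(key):
        os.environ[key] = getpass(f"Enter {key}: ")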
Import necessary libraries
import os
from typing import Literal
from tavily import TavilyClient
from deepagents import create_deep_agent
tavily_client = TavilyClient()
Tools
Just like regular tool-using agents, deep agents can be equipped with a set of tools to help them perform their tasks.
In this example, we grant the agent access to the Tavily search tool, which it can use to collect real-time information from the web.
from typing import Literal
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent

def internet_search(
    query: str,
    max_results: int = 5,
    topic: Literal["general", "news", "finance"] = "general",
    include_raw_content: bool = False,
):
    """Run a web search"""
    search_docs = tavily_client.search(
        query,
        max_results=max_results,
        include_raw_content=include_raw_content,
        topic=topic,
    )
    return search_docs
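Before wiring the tool into an agent, it can help to call it once directly. The snippet below is a quick sanity check; it assumes TAVILY_API_KEY is set and relies on Tavily returning a dict with a "results" list of title/url entries.
# Quick sanity check of the search tool (assumes TAVILY_API_KEY is set).
docs = internet_search("EU AI Act latest updates", max_results=3, topic="news")
for item in docs.get("results", []):
    print(item.get("title"), "->", item.get("url"))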
Subagents
Subagents are one of the most powerful features of DeepAgents. They allow a main agent to delegate specific parts of a complex task to smaller, specialized agents, each with its own focus, tools, and instructions. This helps keep the main agent’s context clean and organized while still allowing deep, focused work on individual subtasks.
In our example, we define two subagents:
- policy-research-agent: A specialized researcher that conducts in-depth analysis of global AI policies, regulations, and ethical frameworks. It uses the internet_search tool to collect real-time information and produce well-structured, professional reports.
- policy-critique-agent: An editorial agent that reviews the generated report for accuracy, completeness, and tone. It ensures that the research is balanced, factual, and consistent with regional legal frameworks.
Together, these subagents enable the main deep agent to perform research, analysis, and quality review in a structured, modular workflow.
sub_research_prompt = """
You are a specialized AI policy researcher.
Conduct in-depth research on government policies, global regulations, and ethical frameworks related to artificial intelligence.
Your answer should:
- Provide key updates and trends
- Include relevant sources and laws (e.g., EU AI Act, U.S. Executive Orders)
- Compare global approaches when relevant
- Be written in clear, professional language
Only your FINAL message will be passed back to the main agent.
"""
research_sub_agent = {
    "name": "policy-research-agent",
    "description": "Used to research specific AI policy and regulation questions in depth.",
    "system_prompt": sub_research_prompt,
    "tools": [internet_search],
}
sub_critique_prompt = """
You are a policy editor reviewing a report on AI governance.
Check the report at `final_report.md` and the question at `question.txt`.
Focus on:
- Accuracy and completeness of legal information
- Proper citation of policy documents
- Balanced analysis of regional differences
- Clarity and neutrality of tone
Provide constructive feedback, but do NOT modify the report directly.
"""
critique_sub_agent = {
    "name": "policy-critique-agent",
    "description": "Critiques AI policy research reports for completeness, clarity, and accuracy.",
    "system_prompt": sub_critique_prompt,
}
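If you want to iterate on a subagent’s prompt before wiring everything together, one option (a sketch, not part of the original tutorial) is to run it as a standalone deep agent using the same create_deep_agent API shown later in this article; the model and query here are illustrative.
# Optional sketch: exercise the research prompt on its own.
# Model choice is illustrative; any LangChain-supported chat model works.
research_only = create_deep_agent(
    model=init_chat_model(model="openai:gpt-4o"),
    tools=[internet_search],
    system_prompt=sub_research_prompt,
)
preview = research_only.invoke(
    {"messages": [{"role": "user", "content": "Summarize the EU AI Act's risk tiers."}]}
)
print(preview["messages"][-1].content)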
System prompt
DeepAgents includes a built-in system prompt as its core instruction set. This prompt is inspired by the system prompt used in Claude Code but is intentionally more general, providing guidance on how to use the built-in capabilities such as planning, file system operations, and subagent orchestration.
However, while the default system prompt lets a deep agent work out of the box, it is highly recommended that you define a custom system prompt suited to your specific use case. Prompt design plays a crucial role in shaping the reasoning, structure, and overall performance of an agent.
In our example, we define a custom prompt called policy_research_instructions that turns the agent into an expert AI policy researcher. It clearly outlines the step-by-step workflow: saving the question, researching with the research subagent, writing the report, and optionally calling the critique subagent for review. It also enforces best practices such as Markdown formatting, citation style, and professional tone so the final report meets the standards of a high-quality policy briefing.
policy_research_instructions = """
You are an expert AI policy researcher and analyst.
Your job is to investigate questions related to global AI regulation, ethics, and governance frameworks.
1️⃣ Save the user's question to `question.txt`
2️⃣ Use the `policy-research-agent` to perform in-depth research
3️⃣ Write a detailed report to `final_report.md`
4️⃣ Optionally, ask the `policy-critique-agent` to critique your draft
5️⃣ Revise if necessary, then output the final, comprehensive report
When writing the final report:
- Use Markdown with clear sections (## for each)
- Include citations in [Title](URL) format
- Add a ### Sources section at the end
- Write in professional, neutral tone suitable for policy briefings
"""
Main agent
Here we define our main deep agent using the create_deep_agent() function. We initialize the model with OpenAI’s gpt-4o, but as the commented-out line shows, you can easily switch to Google’s Gemini 2.5 Flash if you prefer. The agent is configured with the internet_search tool, our custom policy_research_instructions system prompt, and the two subagents: one for in-depth research and one for critique.
By default, if no model is explicitly specified, DeepAgents uses Claude Sonnet 4.5 internally, but the library gives you full flexibility to plug in OpenAI, Gemini, Anthropic, or any other LLM supported by LangChain.
model = init_chat_model(model="openai:gpt-4o")
# model = init_chat_model(model="google_genai:gemini-2.5-flash")
agent = create_deep_agent(
    model=model,
    tools=[internet_search],
    system_prompt=policy_research_instructions,
    subagents=[research_sub_agent, critique_sub_agent],
)
Call the agent
query = "What are the latest updates on the EU AI Act and its global impact?"
result = agent.invoke({"messages": [{"role": "user", "content": query}]})
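The object returned by invoke() is the agent’s final LangGraph state. It always includes the message history; with the default in-state virtual filesystem, files written by the agent (such as final_report.md) are typically exposed under a "files" key as well, though this detail can vary between deepagents versions.
# Print the agent's final answer.
print(result["messages"][-1].content)

# Assumption: the default in-state virtual filesystem stores written files
# under the "files" key of the returned state (may vary by version).
files = result.get("files", {})
if "final_report.md" in files:
    print(files["final_report.md"])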
