How I built an intelligent multi-agent system using AutoGen, LangChain, and Hugging Face to demonstrate a practical agentic AI workflow
In this tutorial, we dive into the nitty-gritty of Agentic AI by uniting LangChain, AutoGen, and Hugging Face into a single, full-featured framework that runs without a paid API. We start by building a lightweight open-source pipeline, then progress to structured inference, multi-step workflows, and collaborative agent interactions. As we move from LangChain chains to simulated multi-agent systems, we see how reasoning, planning, and execution merge into autonomous, intelligent behavior that stays entirely within our control and environment. Check out the full code here.
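Before running the code below, the required packages need to be installed. The exact package set and versions are our assumption rather than something spelled out in the original post; in a notebook environment, a reasonable starting point is:

!pip install -q transformers langchain langchain-community pyautogen sentencepiece torch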
import warnings
warnings.filterwarnings('ignore')
from typing import List, Dict
import autogen
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline
import json
print("π Loading models...n")
pipe = pipeline(
"text2text-generation",
model="google/flan-t5-base",
max_length=200,
temperature=0.7
)
llm = HuggingFacePipeline(pipeline=pipe)
print("β Models loaded!n")
We first set up the environment and bring in all the necessary libraries. We initialize the Hugging Face FLAN-T5 pipeline as a local language model so that it can generate coherent, context-rich text, and we confirm that everything loads successfully, laying the foundation for the agent experiments that follow. Check out the full code here.
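As a quick optional check (our addition, not part of the original script), you can call the raw pipeline once to confirm the local model responds before wiring it into LangChain:

# Sanity check: the text2text-generation pipeline returns a list of dicts.
sanity = pipe("Summarize: Agentic AI combines reasoning, planning, and execution.")
print(sanity[0]["generated_text"])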
def demo_langchain_basics():
    print("="*70)
    print("DEMO 1: LangChain - Intelligent Prompt Chains")
    print("="*70 + "\n")
    prompt = PromptTemplate(
        input_variables=["task"],
        template="Task: {task}\n\nProvide a detailed step-by-step solution:"
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    task = "Create a Python function to calculate fibonacci sequence"
    print(f"Task: {task}\n")
    result = chain.run(task=task)
    print(f"LangChain Response:\n{result}\n")
    print("✅ LangChain demo complete\n")

def demo_langchain_multi_step():
    print("="*70)
    print("DEMO 2: LangChain - Multi-Step Reasoning")
    print("="*70 + "\n")
    planner = PromptTemplate(
        input_variables=["goal"],
        template="Break down this goal into 3 steps: {goal}"
    )
    executor = PromptTemplate(
        input_variables=["step"],
        template="Explain how to execute this step: {step}"
    )
    plan_chain = LLMChain(llm=llm, prompt=planner)
    exec_chain = LLMChain(llm=llm, prompt=executor)
    goal = "Build a machine learning model"
    print(f"Goal: {goal}\n")
    plan = plan_chain.run(goal=goal)
    print(f"Plan:\n{plan}\n")
    print("Executing first step...")
    execution = exec_chain.run(step="Collect and prepare data")
    print(f"Execution:\n{execution}\n")
    print("✅ Multi-step reasoning complete\n")
We explore the capabilities of LangChain by building prompt templates that let the model reason through tasks. We build both a simple one-step chain and a multi-step reasoning flow that breaks complex goals down into clear subtasks, and we observe how LangChain's structured prompting turns simple instructions into detailed, actionable responses. Check out the full code here.
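If you want the plan to feed the execution step automatically instead of passing it by hand, a minimal sketch using LangChain's SimpleSequentialChain (our addition, assuming the llm object defined earlier) could look like this:

from langchain.chains import SimpleSequentialChain

plan_prompt = PromptTemplate(
    input_variables=["goal"],
    template="Break down this goal into 3 steps: {goal}"
)
exec_prompt = PromptTemplate(
    input_variables=["step"],
    template="Explain how to execute this step: {step}"
)
# Each sub-chain has a single input and a single output, so SimpleSequentialChain
# can pipe the planner's output straight into the executor.
auto_chain = SimpleSequentialChain(
    chains=[LLMChain(llm=llm, prompt=plan_prompt), LLMChain(llm=llm, prompt=exec_prompt)],
    verbose=True
)
print(auto_chain.run("Build a machine learning model"))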
class SimpleAgent:
    def __init__(self, name: str, role: str, llm_pipeline):
        self.name = name
        self.role = role
        self.pipe = llm_pipeline
        self.memory = []

    def process(self, message: str) -> str:
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        response = self.pipe(prompt, max_length=150)[0]['generated_text']
        self.memory.append({"user": message, "agent": response})
        return response

    def __repr__(self):
        return f"Agent({self.name}, role={self.role})"

def demo_simple_agents():
    print("="*70)
    print("DEMO 3: Simple Multi-Agent System")
    print("="*70 + "\n")
    researcher = SimpleAgent("Researcher", "research specialist", pipe)
    coder = SimpleAgent("Coder", "Python developer", pipe)
    reviewer = SimpleAgent("Reviewer", "code reviewer", pipe)
    print("Agents created:", researcher, coder, reviewer, "\n")
    task = "Create a function to sort a list"
    print(f"Task: {task}\n")
    print(f"[{researcher.name}] Researching...")
    research = researcher.process(f"What's the best approach to: {task}")
    print(f"Research: {research[:100]}...\n")
    print(f"[{coder.name}] Coding...")
    code = coder.process(f"Write Python code to: {task}")
    print(f"Code: {code[:100]}...\n")
    print(f"[{reviewer.name}] Reviewing...")
    review = reviewer.process(f"Review this approach: {code[:50]}")
    print(f"Review: {review[:100]}...\n")
    print("✅ Multi-agent workflow complete\n")
We design lightweight agents powered by the same Hugging Face pipeline, with each agent assigned a specific role such as researcher, coder, or reviewer. We have these agents collaborate on a simple coding task, exchanging information and building on each other's output, and we see how a coordinated multi-agent workflow can simulate teamwork and self-organization in an automated environment. Check out the full code here.
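To make the hand-offs explicit, here is a small orchestration helper (a sketch of our own, not part of the original demo) that passes a task through any list of SimpleAgent instances, feeding each agent the previous agent's output:

def run_team(agents, task: str) -> str:
    # Sequentially route the evolving context through each agent in order.
    context = task
    for agent in agents:
        context = agent.process(f"As a {agent.role}, continue this work: {context}")
        print(f"[{agent.name}] {context[:80]}...")
    return context

# Example usage (assuming the agents created in demo_simple_agents):
# run_team([researcher, coder, reviewer], "Create a function to sort a list")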
def demo_autogen_conceptual():
    print("="*70)
    print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
    print("="*70 + "\n")
    agent_config = {
        "agents": [
            {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
            {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
            {"name": "Executor", "type": "executor", "role": "Runs code"}
        ],
        "workflow": [
            "1. UserProxy receives task",
            "2. Assistant generates solution",
            "3. Executor tests solution",
            "4. Feedback loop until complete"
        ]
    }
    print(json.dumps(agent_config, indent=2))
    print("\nAutoGen Key Features:")
    print("  • Automated agent chat conversations")
    print("  • Code execution capabilities")
    print("  • Human-in-the-loop support")
    print("  • Multi-agent collaboration")
    print("  • Tool/function calling\n")
    print("✅ AutoGen concepts explained\n")
class MockLLM:
    def __init__(self):
        # Canned responses keyed by intent. The original snippet was truncated
        # here, so these strings are representative placeholders.
        self.responses = {
            "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
            "explain": "This function computes Fibonacci numbers recursively.",
            "review": "The code works, but the recursion is exponential; consider memoization.",
            "default": "I can help with coding, explaining, and reviewing tasks."
        }

    def generate(self, prompt: str) -> str:
        prompt_lower = prompt.lower()
        if "code" in prompt_lower or "function" in prompt_lower:
            return self.responses["code"]
        elif "explain" in prompt_lower:
            return self.responses["explain"]
        elif "review" in prompt_lower:
            return self.responses["review"]
        return self.responses["default"]
def demo_autogen_with_mock():
    print("="*70)
    print("DEMO 5: AutoGen with Custom LLM Backend")
    print("="*70 + "\n")
    mock_llm = MockLLM()
    conversation = [
        ("User", "Create a fibonacci function"),
        ("CodeAgent", mock_llm.generate("write code for fibonacci")),
        ("ReviewAgent", mock_llm.generate("review this code")),
    ]
    print("Simulated AutoGen Multi-Agent Conversation:\n")
    for speaker, message in conversation:
        print(f"[{speaker}]")
        print(f"{message}\n")
    print("✅ AutoGen simulation complete\n")
We illustrate the core ideas of AutoGen by defining a conceptual configuration of agents and their workflow. We then simulate AutoGen-style conversations with a custom mock LLM that produces realistic, controllable responses, and we see how this framework lets multiple agents collaboratively reason, test, and refine ideas without relying on any external APIs. Check out the full code here.
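For readers who want to go beyond the simulation, the same pattern maps onto the real pyautogen API. The sketch below is our own illustration, and the base_url points to a hypothetical locally hosted OpenAI-compatible server rather than anything used in this tutorial:

import autogen

# Placeholder config for a local OpenAI-compatible endpoint (hypothetical values).
config_list = [{"model": "local-model", "base_url": "http://localhost:8000/v1", "api_key": "not-needed"}]

assistant = autogen.AssistantAgent(name="Assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(
    name="UserProxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)
# Kicks off an automated chat in which the assistant proposes code and the
# user proxy executes it, looping until the task is considered done.
# user_proxy.initiate_chat(assistant, message="Create and test a fibonacci function.")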
def demo_hybrid_system():
    print("="*70)
    print("DEMO 6: Hybrid LangChain + Multi-Agent System")
    print("="*70 + "\n")
    reasoning_prompt = PromptTemplate(
        input_variables=["problem"],
        template="Analyze this problem: {problem}\nWhat are the key steps?"
    )
    reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
    planner = SimpleAgent("Planner", "strategic planner", pipe)
    executor = SimpleAgent("Executor", "task executor", pipe)
    problem = "Optimize a slow database query"
    print(f"Problem: {problem}\n")
    print("[LangChain] Analyzing problem...")
    analysis = reasoning_chain.run(problem=problem)
    print(f"Analysis: {analysis[:120]}...\n")
    print(f"[{planner.name}] Creating plan...")
    plan = planner.process(f"Plan how to: {problem}")
    print(f"Plan: {plan[:120]}...\n")
    print(f"[{executor.name}] Executing...")
    result = executor.process("Execute: Add database indexes")
    print(f"Result: {result[:120]}...\n")
    print("✅ Hybrid system complete\n")
if __name__ == "__main__":
print("="*70)
print("π€ ADVANCED AGENTIC AI TUTORIAL")
print("AutoGen + LangChain + HuggingFace")
print("="*70 + "n")
demo_langchain_basics()
demo_langchain_multi_step()
demo_simple_agents()
demo_autogen_conceptual()
demo_autogen_with_mock()
demo_hybrid_system()
print("="*70)
print("π TUTORIAL COMPLETE!")
print("="*70)
print("nπ What You Learned:")
print(" β LangChain prompt engineering and chains")
print(" β Multi-step reasoning with LangChain")
print(" β Building custom multi-agent systems")
print(" β AutoGen architecture and concepts")
print(" β Combining LangChain + agents")
print(" β Using HuggingFace models (no API needed!)")
print("nπ‘ Key Takeaway:")
print(" You can build powerful agentic AI systems without expensive APIs!")
print(" Combine LangChain's chains with multi-agent architectures for")
print(" intelligent, autonomous AI systems.")
print("="*70 + "n")
We combine LangChain's structured reasoning with our simple agent system to create a hybrid intelligence framework: LangChain analyzes the problem, while the agents plan and execute the corresponding actions in sequence. We end the demo by running all the modules together, showing how open-source tools can be integrated to build adaptive, autonomous AI systems.
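If you want to reuse the hybrid pattern outside the demo function, one possible wrapper (our sketch, assuming the reasoning chain and the two agents are created at module level rather than inside demo_hybrid_system) is:

def solve_with_hybrid(problem: str) -> dict:
    # LangChain performs the analysis, then the agents plan and execute in turn.
    analysis = reasoning_chain.run(problem=problem)
    plan = planner.process(f"Plan how to solve: {problem}\nAnalysis: {analysis}")
    result = executor.process(f"Execute the first step of this plan: {plan}")
    return {"analysis": analysis, "plan": plan, "result": result}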
In short, we have seen how agentic AI can move from concept to reality through simple, modular design. We combine LangChain's depth of reasoning with the collaborative power of lightweight agents to build an adaptive system that can think, plan, and act independently. The result clearly demonstrates that powerful autonomous AI systems can be built without expensive infrastructure, using open-source tools, creative design, and some experimentation.
Check out the full code here, and feel free to visit our GitHub page for tutorials, code, and notebooks. You can also follow us on Twitter, join our 100k+ ML SubReddit, subscribe to our newsletter, and join us on Telegram.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for the benefit of society. His most recent endeavor is the launch of Marktechpost, an AI media platform that stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easy to understand for a broad audience. The platform has more than 2 million monthly views, a testament to its popularity among readers.