A Step-by-Step Implementation Tutorial for Building Modular AI Workflows with LangGraph and Anthropic's Claude Sonnet 3.7 API

In this tutorial, we provide a practical guide to implementing LangGraph, a streamlined, graph-based AI orchestration framework that integrates seamlessly with Anthropic's Claude API. Through detailed, executable code optimized for Google Colab, developers will learn how to build and visualize AI workflows as interconnected nodes that perform distinct tasks, such as generating concise answers, critically analyzing responses, and automatically composing technical blog content. The compact implementation highlights LangGraph's intuitive node-graph architecture, which manages complex sequences of Claude-driven natural language tasks, from basic question-answering schemes to advanced content generation pipelines.
from getpass import getpass
import os
anthropic_key = getpass("Enter your Anthropic API key: ")
os.environ["ANTHROPIC_API_KEY"] = anthropic_key
print("Key set:", "ANTHROPIC_API_KEY" in os.environ)
We securely prompt the user to enter their Anthropic API key using Python's getpass module, ensuring the sensitive value is never displayed on screen. The key is then set as an environment variable (ANTHROPIC_API_KEY), and we confirm that it has been stored successfully.
import os
import json
import requests
from typing import Dict, List, Any, Callable, Optional, Union
from dataclasses import dataclass, field
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import display, HTML, clear_output
We import the core libraries needed to build and visualize structured AI workflows: modules for data handling (json, requests, dataclasses), graph creation and visualization (networkx, matplotlib), interactive notebook display (IPython.display), and type annotations (typing).
try:
    import anthropic
except ImportError:
    print("Installing anthropic package...")
    !pip install -q anthropic
    import anthropic
from anthropic import Anthropic
We make sure the anthropic Python package is available. The code tries to import the module and, if it is not found, automatically installs it in the Google Colab environment using pip. Once installed, it imports the Anthropic client, which is essential for interacting with Claude models through the Anthropic API.
@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)
The NodeConfig dataclass defines the structure of each node in the LangGraph workflow. Each node has a name, an executable function, optional inputs and outputs, and an optional configuration dictionary for storing additional parameters. This setup allows for modular, reusable node definitions for graph-based AI tasks.
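To make the structure concrete, here is a minimal sketch that instantiates a simple transformation node. The dataclass is restated so the snippet runs standalone, and the uppercasing node is a hypothetical example, not part of the tutorial's workflow:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class NodeConfig:
    name: str
    function: Callable
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)

# A hypothetical node that uppercases the "text" key of the shared state
node = NodeConfig(
    name="uppercaser",
    function=lambda state, **kw: state.get("text", "").upper(),
    inputs=["text"],
    outputs=["uppercased_text"],
)

print(node.name, node.outputs)           # uppercaser ['uppercased_text']
print(node.function({"text": "hello"}))  # HELLO
```

Because the function takes the whole state dict plus keyword arguments, the same signature works for both local transformations and Claude-calling nodes.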
class LangGraph:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        if not self.api_key:
            try:
                from google.colab import userdata
                self.api_key = userdata.get('ANTHROPIC_API_KEY')
                if not self.api_key:
                    raise ValueError("No API key found")
            except Exception:
                print("No Anthropic API key found in environment variables or Colab secrets.")
                self.api_key = input("Please enter your Anthropic API key: ")
        if not self.api_key:
            raise ValueError("Please provide an Anthropic API key")
        self.client = Anthropic(api_key=self.api_key)
        self.graph = nx.DiGraph()
        self.nodes = {}
        self.state = {}

    def add_node(self, node_config: NodeConfig):
        self.nodes[node_config.name] = node_config
        self.graph.add_node(node_config.name)
        for input_node in node_config.inputs:
            if input_node in self.nodes:
                self.graph.add_edge(input_node, node_config.name)
        return self

    def claude_node(self, name: str, prompt_template: str, model: str = "claude-3-7-sonnet-20250219",
                    inputs: List[str] = None, outputs: List[str] = None, system_prompt: str = None):
        """Convenience method to create a Claude API node"""
        inputs = inputs or []
        outputs = outputs or [name + "_response"]

        def claude_fn(state, **kwargs):
            # Fill {placeholders} in the prompt template from the shared state
            prompt = prompt_template
            for k, v in state.items():
                if isinstance(v, str):
                    prompt = prompt.replace(f"{{{k}}}", v)
            message_params = {
                "model": model,
                "max_tokens": 1000,
                "messages": [{"role": "user", "content": prompt}]
            }
            if system_prompt:
                message_params["system"] = system_prompt
            response = self.client.messages.create(**message_params)
            return response.content[0].text

        node_config = NodeConfig(
            name=name,
            function=claude_fn,
            inputs=inputs,
            outputs=outputs,
            config={"model": model, "prompt_template": prompt_template}
        )
        return self.add_node(node_config)

    def transform_node(self, name: str, transform_fn: Callable,
                       inputs: List[str] = None, outputs: List[str] = None):
        """Add a data transformation node"""
        inputs = inputs or []
        outputs = outputs or [name + "_output"]
        node_config = NodeConfig(
            name=name,
            function=transform_fn,
            inputs=inputs,
            outputs=outputs
        )
        return self.add_node(node_config)

    def visualize(self):
        """Visualize the graph"""
        plt.figure(figsize=(10, 6))
        pos = nx.spring_layout(self.graph)
        nx.draw(self.graph, pos, with_labels=True, node_color="lightblue",
                node_size=1500, arrowsize=20, font_size=10)
        plt.title("LangGraph Flow")
        plt.tight_layout()
        plt.show()
        print("\nGraph Structure:")
        for node in self.graph.nodes():
            successors = list(self.graph.successors(node))
            if successors:
                print(f"  {node} → {', '.join(successors)}")
            else:
                print(f"  {node} (endpoint)")
        print()

    def _get_execution_order(self):
        """Determine execution order based on dependencies"""
        try:
            return list(nx.topological_sort(self.graph))
        except nx.NetworkXUnfeasible:
            raise ValueError("Graph contains a cycle")

    def execute(self, initial_state: Dict[str, Any] = None):
        """Execute the graph in topological order"""
        self.state = initial_state or {}
        execution_order = self._get_execution_order()
        print("Executing LangGraph flow:")
        for node_name in execution_order:
            print(f"- Running node: {node_name}")
            node = self.nodes[node_name]
            inputs = {k: self.state.get(k) for k in node.inputs if k in self.state}
            result = node.function(self.state, **inputs)
            if len(node.outputs) == 1:
                self.state[node.outputs[0]] = result
            elif isinstance(result, (list, tuple)) and len(result) == len(node.outputs):
                for i, output_name in enumerate(node.outputs):
                    self.state[output_name] = result[i]
        print("Execution completed!")
        return self.state
def run_example(question="What are the key benefits of using a graph-based architecture for AI workflows?"):
    """Run an example LangGraph flow with a predefined question"""
    print(f"Running example with question: '{question}'")
    graph = LangGraph()

    def question_provider(state, **kwargs):
        return question

    graph.transform_node(
        name="question_provider",
        transform_fn=question_provider,
        outputs=["user_question"]
    )
    graph.claude_node(
        name="question_answerer",
        prompt_template="Answer this question clearly and concisely: {user_question}",
        inputs=["user_question"],
        outputs=["answer"],
        system_prompt="You are a helpful AI assistant."
    )
    graph.claude_node(
        name="answer_analyzer",
        prompt_template="Analyze if this answer addresses the question well: Question: {user_question}\nAnswer: {answer}",
        inputs=["user_question", "answer"],
        outputs=["analysis"],
        system_prompt="You are a critical evaluator. Be brief but thorough."
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("EXECUTION RESULTS:")
    print("="*50)
    print(f"\n🔍 QUESTION:\n{result.get('user_question')}\n")
    print(f"📝 ANSWER:\n{result.get('answer')}\n")
    print(f"✅ ANALYSIS:\n{result.get('analysis')}")
    print("="*50 + "\n")
    return graph
The LangGraph class implements a lightweight framework for building and executing graph-based AI workflows powered by Anthropic's Claude. It lets users define modular nodes, either Claude-driven prompts or custom transformation functions, connect them through dependencies, visualize the entire pipeline, and execute them in topological order. The run_example function demonstrates this by building a simple question-answering and evaluation flow, showcasing the clarity and modularity of the LangGraph architecture.
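The heart of execute() is the topological ordering: every node runs only after the nodes it depends on have written their outputs into the shared state. The following stripped-down sketch reproduces that execution model without API calls or networkx, using the standard-library graphlib (Python 3.9+) so it runs anywhere; the node names and functions are illustrative, not from the tutorial:

```python
from graphlib import TopologicalSorter

# Each node: (function over the shared state, dependencies, output key)
nodes = {
    "source":  (lambda s: "graph-based AI",      [],         "topic"),
    "shout":   (lambda s: s["topic"].upper(),    ["source"], "loud_topic"),
    "exclaim": (lambda s: s["loud_topic"] + "!", ["shout"],  "final"),
}

# Dependency mapping: node -> set of prerequisite nodes
deps = {name: set(d) for name, (_, d, _) in nodes.items()}

state = {}
# static_order() yields each node only after all of its dependencies,
# mirroring what nx.topological_sort does inside LangGraph.execute()
for name in TopologicalSorter(deps).static_order():
    fn, _, out_key = nodes[name]
    state[out_key] = fn(state)

print(state["final"])  # GRAPH-BASED AI!
```

Because each step reads only keys written by its predecessors, the same shared-state pattern scales from three toy nodes to the Claude-backed pipelines below.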
def run_advanced_example():
    """Run a more advanced example with multiple nodes for content generation"""
    graph = LangGraph()

    def topic_selector(state, **kwargs):
        return "Graph-based AI systems"

    graph.transform_node(
        name="topic_selector",
        transform_fn=topic_selector,
        outputs=["topic"]
    )
    graph.claude_node(
        name="outline_generator",
        prompt_template="Create a brief outline for a technical blog post about {topic}. Include 3-4 main sections only.",
        inputs=["topic"],
        outputs=["outline"],
        system_prompt="You are a technical writer specializing in AI technologies."
    )
    graph.claude_node(
        name="intro_writer",
        prompt_template="Write an engaging introduction for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["introduction"],
        system_prompt="You are a technical writer. Write in a clear, engaging style."
    )
    graph.claude_node(
        name="conclusion_writer",
        prompt_template="Write a conclusion for a blog post with this outline: {outline}\nTopic: {topic}",
        inputs=["topic", "outline"],
        outputs=["conclusion"],
        system_prompt="You are a technical writer. Summarize key points and include a forward-looking statement."
    )

    def assembler(state, introduction, outline, conclusion, **kwargs):
        return f"# {state['topic']}\n\n{introduction}\n\n## Outline\n{outline}\n\n## Conclusion\n{conclusion}"

    graph.transform_node(
        name="content_assembler",
        transform_fn=assembler,
        inputs=["topic", "introduction", "outline", "conclusion"],
        outputs=["final_content"]
    )
    graph.visualize()
    result = graph.execute()
    print("\n" + "="*50)
    print("BLOG POST GENERATED:")
    print("="*50 + "\n")
    print(result.get("final_content"))
    print("\n" + "="*50)
    return graph
The run_advanced_example function generates a complete blog post by orchestrating multiple Claude-powered nodes, demonstrating a more complex use of LangGraph. It first selects a topic, then uses structured Claude prompts to create an outline, an introduction, and a conclusion. Finally, a transformation node assembles the content into a formatted blog post. This example shows how LangGraph automates complex, multi-step content generation tasks through modular, connected nodes in a clear and executable process.
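The prompt templating that feeds each Claude node is plain string substitution over the shared state. Here is a standalone sketch of that mechanism; fill_template is a hypothetical helper mirroring the loop inside claude_fn, not a function defined in the tutorial:

```python
def fill_template(template: str, state: dict) -> str:
    # Mirrors claude_fn's loop: replace each {key} with the state's string
    # values; non-string values are skipped, so their placeholders remain.
    out = template
    for k, v in state.items():
        if isinstance(v, str):
            out = out.replace("{" + k + "}", v)
    return out

state = {
    "topic": "Graph-based AI systems",
    "outline": "1. Intro\n2. Core ideas\n3. Conclusion",
}
prompt = fill_template("Write an introduction about {topic}:\n{outline}", state)
print(prompt)
```

This is deliberately simpler than str.format(): unknown placeholders are left intact rather than raising a KeyError, so a node can safely reference keys that an upstream node has not produced yet.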
print("1. Running simple question-answering example")
question = "What are the three main advantages of using graph-based AI architectures?"
simple_graph = run_example(question)
print("\n2. Running advanced blog post creation example")
advanced_graph = run_advanced_example()
Finally, we trigger the execution of the two defined LangGraph workflows. First, the script runs the simple question-answering example by passing a predefined question to the run_example() function. It then launches the more advanced blog post generation workflow with run_advanced_example(). Together, these calls demonstrate LangGraph's practical flexibility, from a single prompt-driven interaction to multi-step content automation using Anthropic's Claude API.
In summary, we have implemented LangGraph integrated with Anthropic's Claude API, illustrating how straightforward it is to design modular AI workflows that combine structured, graph-based pipelines with powerful language models. By visualizing task flow and separating responsibilities among nodes, such as question handling, answer evaluation, content outlining, and assembly, developers gain hands-on experience building maintainable, scalable AI systems. LangGraph's clear node dependencies and Claude's sophisticated language capabilities provide an effective solution for orchestrating complex AI processes, especially for rapid prototyping and execution in environments such as Google Colab.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform noted for its in-depth coverage of machine learning and deep learning news that is both technically sound and understandable by a wide audience. The platform draws over 2 million views per month, demonstrating its popularity among readers.
