
Complete code implementation to design graph-structured AI agents with Gemini for task planning, retrieval, calculation, and self-evaluation

In this tutorial, we implement an advanced graph-based AI agent using a GraphAgent framework and the Gemini 1.5 Flash model. We define a directed graph of nodes, each responsible for a specific function: a planner that decomposes the task, a router that controls flow, research and math nodes that supply external evidence and calculations, a writer that composes the answer, and a critic that validates and refines the output. We integrate Gemini through a wrapper that handles structured JSON prompts, while native Python functions act as tools for safe math evaluation and document search. By executing this pipeline end to end, we demonstrate how reasoning, retrieval, and validation combine modularly into a single cohesive system. Check out the full code here.

import os, json, time, ast, math, getpass
from dataclasses import dataclass, field
from typing import Dict, List, Callable, Any
import google.generativeai as genai


try:
   import networkx as nx
except ImportError:
   nx = None

First, we import the core Python libraries for data handling, timing, and safe evaluation, along with dataclass and typing helpers for structured state. We also load the google.generativeai client to access Gemini, and optionally networkx for graph visualization.

def make_model(api_key: str, model_name: str = "gemini-1.5-flash"):
   genai.configure(api_key=api_key)
   return genai.GenerativeModel(model_name, system_instruction=(
       "You are GraphAgent, a principled planner-executor. "
       "Prefer structured, concise outputs; use provided tools when asked."
   ))


def call_llm(model, prompt: str, temperature=0.2) -> str:
   r = model.generate_content(prompt, generation_config={"temperature": temperature})
   return (r.text or "").strip()

We define a helper that configures and returns a Gemini model with a custom system instruction, and another function that calls the LLM with a prompt while controlling the temperature. We use this setup to ensure our agent consistently receives structured, concise output.
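Because call_llm returns raw text, the node functions below slice out the JSON payload between the first `{` and the last `}` before parsing. A minimal, standalone sketch of that pattern (the `extract_json` helper name is ours, not part of the agent code):

```python
import json

def extract_json(text: str) -> dict:
    # Slice from the first "{" to the last "}" so prose around the
    # JSON payload (e.g., "Sure! Here is the plan: ...") is tolerated.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    return json.loads(text[start:end + 1])

reply = 'Sure! Here is the plan:\n{"subtasks": ["Research", "Synthesize"]}\nHope that helps.'
print(extract_json(reply)["subtasks"])  # ['Research', 'Synthesize']
```

This is why the nodes can fall back to a default plan when parsing fails: the slice-and-parse step raises on malformed replies instead of silently proceeding.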

def safe_eval_math(expr: str) -> str:
   node = ast.parse(expr, mode="eval")
   allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
              ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
              ast.USub, ast.UAdd, ast.FloorDiv)  # note: no ast.AST here -- it matches every node and would defeat the whitelist
   def check(n):
       if not isinstance(n, allowed): raise ValueError("Unsafe expression")
       for c in ast.iter_child_nodes(n): check(c)
   check(node)
   return str(eval(compile(node, "", "eval"), {"__builtins__": {}}, {}))


DOCS = [
   "Solar panels convert sunlight to electricity; capacity factor ~20%.",
   "Wind turbines harvest kinetic energy; onshore capacity factor ~35%.",
   "RAG = retrieval-augmented generation joins search with prompting.",
   "LangGraph enables cyclic graphs of agents; good for tool orchestration.",
]
def search_docs(q: str, k: int = 3) -> List[str]:
   ql = q.lower()
   scored = sorted(DOCS, key=lambda d: -sum(w in d.lower() for w in ql.split()))
   return scored[:k]

We implement two key tools for the agent: a safe math evaluator that parses and checks arithmetic expressions with the AST module before execution, and a simple document search that retrieves the most relevant snippets from a small in-memory corpus. These give the agent reliable computation and retrieval capabilities without external dependencies.
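Both tools are pure Python, so we can sanity-check them without an API key. The sketch below re-declares the evaluator's core logic (using `ast.walk` instead of explicit recursion) together with the keyword-overlap scoring that search_docs relies on:

```python
import ast

def eval_math(expr: str) -> str:
    # Parse, whitelist every node type, then evaluate with no builtins.
    node = ast.parse(expr, mode="eval")
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
               ast.USub, ast.UAdd, ast.FloorDiv)
    for n in ast.walk(node):
        if not isinstance(n, allowed):
            raise ValueError("Unsafe expression")
    return str(eval(compile(node, "<expr>", "eval"), {"__builtins__": {}}, {}))

print(eval_math("5 * 7"))         # 35
print(eval_math("(2 + 3) ** 2"))  # 25

# Keyword-overlap scoring: rank documents by how many query words they contain.
docs = [
    "Solar panels convert sunlight to electricity; capacity factor ~20%.",
    "Wind turbines harvest kinetic energy; onshore capacity factor ~35%.",
]
score = lambda d: sum(w in d.lower() for w in "solar capacity".split())
print(max(docs, key=score))  # the solar document matches both words
```

An expression like `__import__('os').system('ls')` parses into `ast.Call` and `ast.Name` nodes, which the whitelist rejects before anything is evaluated.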

@dataclass
class State:
   task: str
   plan: str = ""
   scratch: List[str] = field(default_factory=list)
   evidence: List[str] = field(default_factory=list)
   result: str = ""
   step: int = 0
   done: bool = False


def node_plan(state: State, model) -> str:
   prompt = f"""Plan step-by-step to solve the user task.
Task: {state.task}
Return JSON: {{"subtasks": ["..."], "tools": {{"search": true/false, "math": true/false}}, "success_criteria": ["..."]}}"""
   js = call_llm(model, prompt)
   try:
       plan = json.loads(js[js.find("{"): js.rfind("}")+1])
   except Exception:
       plan = {"subtasks": ["Research", "Synthesize"], "tools": {"search": True, "math": False}, "success_criteria": ["clear answer"]}
   state.plan = json.dumps(plan, indent=2)
    state.scratch.append("PLAN:\n" + state.plan)
   return "route"


def node_route(state: State, model) -> str:
   prompt = f"""You are a router. Decide next node.
Context scratch:\n{chr(10).join(state.scratch[-5:])}
If math needed -> 'math', if research needed -> 'research', if ready -> 'write'.
Return one token from [research, math, write]. Task: {state.task}"""
   choice = call_llm(model, prompt).lower()
   if "math" in choice and any(ch.isdigit() for ch in state.task):
       return "math"
   if "research" in choice or not state.evidence:
       return "research"
   return "write"


def node_research(state: State, model) -> str:
   prompt = f"""Generate 3 focused search queries for:
Task: {state.task}
Return as JSON list of strings."""
   qjson = call_llm(model, prompt)
   try:
       queries = json.loads(qjson[qjson.find("["): qjson.rfind("]")+1])[:3]
   except Exception:
       queries = [state.task, "background "+state.task, "pros cons "+state.task]
   hits = []
   for q in queries:
       hits.extend(search_docs(q, k=2))
   state.evidence.extend(list(dict.fromkeys(hits)))
    state.scratch.append("EVIDENCE:\n- " + "\n- ".join(hits))
   return "route"


def node_math(state: State, model) -> str:
    prompt = "Extract a single arithmetic expression from this task:\n" + state.task
   expr = call_llm(model, prompt)
    expr = "".join(ch for ch in expr if ch in "0123456789+-*/().%^ ")
    expr = expr.replace("^", "**")  # map caret notation to Python's power operator; the AST whitelist rejects ^
   try:
       val = safe_eval_math(expr)
       state.scratch.append(f"MATH: {expr} = {val}")
   except Exception as e:
       state.scratch.append(f"MATH-ERROR: {expr} ({e})")
   return "route"


def node_write(state: State, model) -> str:
   prompt = f"""Write the final answer.
Task: {state.task}
Use the evidence and any math results below, cite inline like [1],[2].
Evidence:\n{chr(10).join(f'[{i+1}] '+e for i,e in enumerate(state.evidence))}
Notes:\n{chr(10).join(state.scratch[-5:])}
Return a concise, structured answer."""
   draft = call_llm(model, prompt, temperature=0.3)
   state.result = draft
    state.scratch.append("DRAFT:\n" + draft)
   return "critic"


def node_critic(state: State, model) -> str:
   prompt = f"""Critique and improve the answer for factuality, missing steps, and clarity.
If fix needed, return improved answer. Else return 'OK'.
Answer:\n{state.result}\nCriteria:\n{state.plan}"""
   crit = call_llm(model, prompt)
   if crit.strip().upper() != "OK" and len(crit) > 30:
       state.result = crit.strip()
       state.scratch.append("REVISED")
   state.done = True
   return "end"


NODES: Dict[str, Callable[[State, Any], str]] = {
   "plan": node_plan, "route": node_route, "research": node_research,
   "math": node_math, "write": node_write, "critic": node_critic
}


def run_graph(task: str, api_key: str) -> State:
   model = make_model(api_key)
   state = State(task=task)
   cur = "plan"
   max_steps = 12
    while not state.done and state.step < max_steps:
        state.step += 1
        nxt = NODES[cur](state, model)
        if nxt == "end" or nxt not in NODES:
            break
        cur = nxt
    return state


def ascii_graph() -> str:
    return "START -> plan -> route -> (research -> route) & (math -> route) -> write -> critic -> END"

We define a typed State dataclass to hold the task, plan, evidence, scratch notes, and control flags as the graph executes. We implement the node functions: planner, router, research, math, writer, and critic. Each mutates the state and returns the tag of the next node. We then register them in NODES and iterate in run_graph until the agent is done. We also expose ascii_graph() to visualize the control flow from planning through research/math to writing and the final critique.
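The run_graph loop itself is model-agnostic: each node returns the name of the next node, and execution continues until a node sets done or returns a name outside the registry. A self-contained miniature of that dispatch pattern (a hypothetical two-node graph, no Gemini calls):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MiniState:
    trace: List[str] = field(default_factory=list)  # visited node names
    done: bool = False

def n_plan(s: MiniState) -> str:
    s.trace.append("plan")
    return "write"            # hand off to the writer node

def n_write(s: MiniState) -> str:
    s.trace.append("write")
    s.done = True
    return "end"              # "end" is not registered, so the loop stops

MINI_NODES: Dict[str, Callable[[MiniState], str]] = {"plan": n_plan, "write": n_write}

def run(start: str = "plan", max_steps: int = 12) -> MiniState:
    s, cur = MiniState(), start
    while not s.done and len(s.trace) < max_steps:
        cur = MINI_NODES[cur](s)
        if cur not in MINI_NODES:
            break
    return s

print(run().trace)  # ['plan', 'write']
```

The max_steps cap plays the same role here as in run_graph: it bounds execution even if a misbehaving node never sets done, which matters when routing decisions come from a probabilistic LLM.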

if __name__ == "__main__":
   key = os.getenv("GEMINI_API_KEY") or getpass.getpass("🔐 Enter GEMINI_API_KEY: ")
   task = input("📝 Enter your task: ").strip() or "Compare solar vs wind for reliability; compute 5*7."
   t0 = time.time()
   state = run_graph(task, key)
   dt = time.time() - t0
    print("\n=== GRAPH ===", ascii_graph())
    print(f"\n✅ Result in {dt:.2f}s:\n{state.result}\n")
    print("---- Evidence ----")
    print("\n".join(state.evidence))
    print("\n---- Scratch (last 5) ----")
    print("\n".join(state.scratch[-5:]))

We define the program's entry point: we read the Gemini API key securely, take the task as input, and run the graph via run_graph. We measure execution time, print an ASCII diagram of the workflow, display the final result, and output the supporting evidence and the last few scratch notes for transparency.

In summary, we demonstrate how graph-structured agents impose deterministic workflow design around a probabilistic LLM. We observe how the planner node performs task decomposition, the router dynamically chooses between research and math, and the critic provides iterative improvement for factuality and clarity. Gemini serves as the central reasoning engine, while the graph nodes provide structure, safety checks, and transparent state management. We end with a fully functional agent that demonstrates the benefits of combining graph orchestration with a modern LLM, allowing extensions such as custom toolchains, multi-turn memory, or parallel node execution in more complex deployments.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform known for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
