
Building Event-Driven AI Agents with uAgents and Google Gemini: A Guide to Modular Python Implementation

In this tutorial, we demonstrate how to use the uAgents framework to build a lightweight, event-driven AI agent architecture on top of Google’s Gemini API. We first apply nest_asyncio to enable nested event loops, then configure the Gemini API key and instantiate the genai client. Next, we define our communication contract, the Question and Answer Pydantic models, and spin up two uAgents: a “gemini_agent” that listens for incoming Question messages, calls the Gemini Flash model to generate an answer, and emits an Answer message; and a “client_agent” that triggers a query at startup and handles the incoming Answer. Finally, we learn how to run these agents concurrently using Python’s multiprocessing utilities and gracefully shut down the event loop once the exchange completes, illustrating uAgents’ seamless orchestration of inter-agent messaging.

!pip install -q uagents google-genai

We install the uAgents framework and the Google genai client library, which provide the tools needed to build and run event-driven AI agents backed by Gemini. The -q flag runs the install quietly to keep the notebook output clean. Check out the notebook here.

import os, time, multiprocessing, asyncio
import nest_asyncio  
from google import genai
from pydantic import BaseModel, Field
from uagents import Agent, Context


nest_asyncio.apply()

We set up the Python environment by importing the required modules: system utilities (os, time, multiprocessing, asyncio), nest_asyncio for enabling nested event loops (essential in notebooks), the Google genai client, Pydantic for schema validation, and the core uAgents classes. Finally, nest_asyncio.apply() patches the event loop so that asynchronous uAgents workflows run seamlessly in an interactive environment.

os.environ["GOOGLE_API_KEY"] = "Use Your Own API Key Here"


client = genai.Client()

Here we set the Gemini API key in the environment; make sure to replace the placeholder with your actual key. We then initialize the genai client, which will handle all subsequent requests to Google’s Gemini models. This step ensures our agents are authenticated before calling the generative API.
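Before creating the client, it can help to fail fast if the key is missing or still the placeholder. The sketch below is a hypothetical helper (require_api_key is not part of the genai library), shown only to illustrate the check:

```python
import os

def require_api_key(var: str = "GOOGLE_API_KEY") -> str:
    """Raise a clear error if the key is unset or still the tutorial placeholder."""
    key = os.environ.get(var, "")
    if not key or "Your Own API Key" in key:
        raise RuntimeError(f"Set {var} to a valid Gemini API key before creating the client")
    return key

# Call require_api_key() once before genai.Client() so a bad key fails
# with a readable message instead of an HTTP error later.
```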

class Question(BaseModel):
    question: str = Field(...)


class Answer(BaseModel):
    answer: str = Field(...)

These Pydantic models define the structured message formats our agents exchange: the Question model carries a question string field, and the Answer model carries an answer string field. Using Pydantic lets uAgents automatically validate and serialize incoming and outgoing messages, ensuring each agent always works with well-formed data.
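To see the validation and serialization in action, here is a small round-trip sketch, assuming Pydantic v2 (on v1 the equivalents are .json() and .parse_raw()):

```python
from pydantic import BaseModel, Field

class Question(BaseModel):
    question: str = Field(...)

class Answer(BaseModel):
    answer: str = Field(...)

# Serialize for the wire, then validate on receipt -- this is what
# happens under the hood when agents exchange messages.
raw = Question(question="What is the capital of France?").model_dump_json()
received = Question.model_validate_json(raw)
```

A payload with the wrong shape (for example, missing the question field) would raise a ValidationError at this step rather than reaching the handler.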

ai_agent = Agent(
    name="gemini_agent",
    seed="agent_seed_phrase",
    port=8000,
    endpoint=["http://127.0.0.1:8000/submit"],
)


@ai_agent.on_event("startup")
async def ai_startup(ctx: Context):
    ctx.logger.info(f"{ai_agent.name} listening on {ai_agent.address}")


def ask_gemini(q: str) -> str:
    resp = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=f"Answer the question: {q}"
    )
    return resp.text


@ai_agent.on_message(model=Question, replies=Answer)
async def handle_question(ctx: Context, sender: str, msg: Question):
    ans = ask_gemini(msg.question)
    await ctx.send(sender, Answer(answer=ans))

In this block, we instantiate the “gemini_agent” with a unique name, a seed phrase (for a deterministic identity), a listening port, and an endpoint. We then register a startup event handler that logs when the agent is ready, giving visibility into its lifecycle. The synchronous helper ask_gemini wraps the genai client call to Gemini’s Flash model. Meanwhile, the @ai_agent.on_message handler receives incoming Question messages, calls ask_gemini, and asynchronously sends the validated Answer payload back to the original sender.
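The request-reply flow above can be sketched with plain asyncio queues standing in for uAgents’ HTTP transport; the function names and the "Echo:" reply here are purely illustrative:

```python
import asyncio

async def gemini_agent(inbox: asyncio.Queue, outbox: asyncio.Queue):
    question = await inbox.get()       # like the @on_message(model=Question) handler firing
    answer = f"Echo: {question}"       # stands in for the ask_gemini() call
    await outbox.put(answer)           # like ctx.send(sender, Answer(...))

async def client_agent(inbox: asyncio.Queue, outbox: asyncio.Queue):
    await outbox.put("What is the capital of France?")  # startup handler sends the question
    return await inbox.get()           # answer handler receives the reply

async def main():
    to_gemini, to_client = asyncio.Queue(), asyncio.Queue()
    _, reply = await asyncio.gather(
        gemini_agent(to_gemini, to_client),
        client_agent(to_client, to_gemini),
    )
    return reply

reply = asyncio.run(main())
```

The real framework adds addressing, signatures, and schema validation on top, but the event-driven shape (wait for a message, compute, send a reply) is the same.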

client_agent = Agent(
    name="client_agent",
    seed="client_seed_phrase",
    port=8001,
    endpoint=["http://127.0.0.1:8001/submit"],
)


@client_agent.on_event("startup")
async def ask_on_start(ctx: Context):
    await ctx.send(ai_agent.address, Question(question="What is the capital of France?"))


@client_agent.on_message(model=Answer)
async def handle_answer(ctx: Context, sender: str, msg: Answer):
    print("📨 Answer from Gemini:", msg.answer)
    # Use a more graceful shutdown
    asyncio.create_task(shutdown_loop())


async def shutdown_loop():
    await asyncio.sleep(1)  # Give time for cleanup
    loop = asyncio.get_event_loop()
    loop.stop()

We set up a “client_agent” that, once started, sends a Question to the gemini_agent asking for the capital of France, then listens for the Answer, prints the received response, and gracefully stops the event loop after a brief delay.
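An alternative to calling loop.stop() directly is to signal completion with an asyncio.Event and wait on it with a timeout, which avoids stopping a loop you may not own. A minimal sketch (run_until_answered and deliver_answer are illustrative names, not uAgents APIs):

```python
import asyncio

async def run_until_answered() -> str:
    done = asyncio.Event()

    async def deliver_answer():
        await asyncio.sleep(0.05)      # stands in for the Gemini round trip
        print("Answer: Paris")
        done.set()                     # signal completion instead of stopping the loop

    asyncio.create_task(deliver_answer())
    await asyncio.wait_for(done.wait(), timeout=2)  # safety timeout in case a reply is lost
    return "stopped cleanly"

status = asyncio.run(run_until_answered())
```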

def run_agent(agent):
    agent.run()


if __name__ == "__main__":
    p = multiprocessing.Process(target=run_agent, args=(ai_agent,))
    p.start()
    time.sleep(2)  


    client_agent.run()


    p.join()

Finally, we define a helper run_agent function that calls agent.run(), then launch gemini_agent in its own process using Python’s multiprocessing. After giving it a moment to spin up, we run client_agent in the main process, blocking until the question-answer round trip completes, and finally join the background process to ensure a clean shutdown.
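The time.sleep(2) is a blind wait; a more robust pattern is a readiness handshake through a multiprocessing.Queue. A small sketch under that assumption, with run_worker standing in for agent.run() (production code on Windows/macOS should wrap the process start in an `if __name__ == "__main__":` guard):

```python
import multiprocessing

def run_worker(q: multiprocessing.Queue):
    # In the real script this would be ai_agent.run(); here we just
    # report readiness so the parent knows when to proceed.
    q.put("worker ready")

q = multiprocessing.Queue()
p = multiprocessing.Process(target=run_worker, args=(q,))
p.start()
msg = q.get(timeout=5)   # block until the worker reports in, instead of sleeping blindly
p.join()
```

This way the client starts as soon as the background agent is actually up, rather than after an arbitrary delay.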

In short, this uAgents-centric tutorial gives us a clear blueprint for creating modular AI services that communicate through well-defined event hooks and message models. You’ve seen how uAgents simplifies agent lifecycle management, registering startup events, processing incoming messages, and sending structured replies, all without boilerplate networking code. From here, you can extend the setup with more complex conversation workflows, multiple message types, and dynamic agent discovery.




