
Multi-tool AI agent using secure Python execution with Riza and Gemini

In this tutorial, we use Riza's secure Python execution as the cornerstone of a powerful, tool-calling AI agent in Google Colab. Starting with seamless API key management, we configure the Riza and Gemini credentials needed for sandboxed, auditable code execution via Colab secrets, environment variables, or hidden getpass prompts. We then integrate Riza's ExecPython tool into a LangChain agent built on Google's Gemini generative model, define an AdvancedCallbackHandler that captures both tool calls and Riza execution logs, and build custom utilities for complex mathematics and in-depth text analysis.

%pip install --upgrade --quiet langchain-community langchain-google-genai rizaio python-dotenv


import os
from typing import Dict, Any, List
from datetime import datetime
import json
import getpass
from google.colab import userdata

We quietly install and upgrade the core libraries in Colab: LangChain community extensions, the Google Gemini integration, Riza's secure execution package, and dotenv support. We then import standard utilities (os, datetime, json), type hints, secure input via getpass, and Colab's userdata API to seamlessly manage environment variables and user secrets.

def setup_api_keys():
    """Set up API keys using multiple secure methods."""
   
    try:
        os.environ['GOOGLE_API_KEY'] = userdata.get('GOOGLE_API_KEY')
        os.environ['RIZA_API_KEY'] = userdata.get('RIZA_API_KEY')
        print("✅ API keys loaded from Colab secrets")
        return True
    except Exception:
        pass
   
    if os.getenv('GOOGLE_API_KEY') and os.getenv('RIZA_API_KEY'):
        print("✅ API keys found in environment")
        return True
   
    try:
        if not os.getenv('GOOGLE_API_KEY'):
            google_key = getpass.getpass("🔑 Enter your Google Gemini API key: ")
            os.environ['GOOGLE_API_KEY'] = google_key
       
        if not os.getenv('RIZA_API_KEY'):
            riza_key = getpass.getpass("🔑 Enter your Riza API key: ")
            os.environ['RIZA_API_KEY'] = riza_key
       
        print("✅ API keys set securely via input")
        return True
    except Exception:
        print("❌ Failed to set API keys")
        return False


if not setup_api_keys():
    print("⚠️  Please set up your API keys using one of these methods:")
    print("   1. Colab Secrets: Go to 🔑 in left panel, add GOOGLE_API_KEY and RIZA_API_KEY")
    print("   2. Environment: Set GOOGLE_API_KEY and RIZA_API_KEY before running")
    print("   3. Manual input: Run the cell and enter keys when prompted")
    exit()

The cell above defines a setup_api_keys() function that safely retrieves your Google Gemini and Riza API keys: it first tries to load them from Colab secrets, then falls back to existing environment variables, and finally prompts you to enter them via hidden getpass input when needed. If none of these methods succeeds, it prints instructions on how to provide the keys and exits the notebook.
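The three-tier fallback amounts to a generic "first provider that answers wins" resolver. The sketch below is a hypothetical, standalone illustration of that pattern; the provider names and stand-in dictionary are ours, not Colab's API:

```python
import os

def resolve_secret(name, providers):
    """Return the first non-empty value produced by a provider, else None.

    Each provider is a callable taking the secret name; any exception or
    empty result counts as "not available" and the next provider is tried,
    mirroring the Colab secrets -> environment -> prompt order above.
    """
    for provider in providers:
        try:
            value = provider(name)
        except Exception:
            continue
        if value:
            return value
    return None

# Illustrative stand-ins for the three tiers in the notebook.
fake_colab_secrets = {}  # stands in for google.colab.userdata
providers = [
    lambda n: fake_colab_secrets[n],  # raises KeyError if absent
    lambda n: os.environ.get(n),
    lambda n: "typed-by-user",        # stands in for getpass.getpass
]

os.environ["DEMO_RIZA_KEY"] = "from-env"
print(resolve_secret("DEMO_RIZA_KEY", providers))    # resolved at tier 2
print(resolve_secret("DEMO_GOOGLE_KEY", providers))  # falls through to tier 3
```

Ordering the providers from least to most intrusive is what lets the same notebook run unattended (secrets or env vars present) or interactively (prompt) without code changes.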

from langchain_community.tools.riza.command import ExecPython
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage, AIMessage
from langchain.memory import ConversationBufferWindowMemory
from langchain.tools import Tool
from langchain.callbacks.base import BaseCallbackHandler

We import Riza's ExecPython tool alongside LangChain's core components for building a tool-calling agent: the Gemini LLM wrapper (ChatGoogleGenerativeAI), the agent creation and execution functions (create_tool_calling_agent, AgentExecutor), prompt and message templates, a conversation memory buffer, the generic Tool wrapper, and the base callback handler for logging and monitoring agent operations. These building blocks let us assemble, configure, and track a memory-enabled, multi-tool AI agent in Colab.

class AdvancedCallbackHandler(BaseCallbackHandler):
    """Enhanced callback handler for detailed logging and metrics."""
   
    def __init__(self):
        self.execution_log = []
        self.start_time = None
        self.token_count = 0
   
    def on_agent_action(self, action, **kwargs):
        timestamp = datetime.now().strftime("%H:%M:%S")
        self.execution_log.append({
            "timestamp": timestamp,
            "action": action.tool,
            "input": str(action.tool_input)[:100] + "..." if len(str(action.tool_input)) > 100 else str(action.tool_input)
        })
        print(f"🔧 [{timestamp}] Using tool: {action.tool}")
   
    def on_agent_finish(self, finish, **kwargs):
        timestamp = datetime.now().strftime("%H:%M:%S")
        print(f"✅ [{timestamp}] Agent completed successfully")
   
    def get_execution_summary(self):
        return {
            "total_actions": len(self.execution_log),
            "execution_log": self.execution_log
        }


class MathTool:
    """Advanced mathematical operations tool."""
   
    @staticmethod
    def complex_calculation(expression: str) -> str:
        """Evaluate complex mathematical expressions safely."""
        try:
            import math
            import numpy as np
           
            safe_dict = {
                "__builtins__": {},
                "abs": abs, "round": round, "min": min, "max": max,
                "sum": sum, "len": len, "pow": pow,
                "math": math, "np": np,
                "sin": math.sin, "cos": math.cos, "tan": math.tan,
                "log": math.log, "sqrt": math.sqrt, "pi": math.pi, "e": math.e
            }
           
            result = eval(expression, safe_dict)
            return f"Result: {result}"
        except Exception as e:
            return f"Math Error: {str(e)}"


class TextAnalyzer:
    """Advanced text analysis tool."""
   
    @staticmethod
    def analyze_text(text: str) -> str:
        """Perform comprehensive text analysis."""
        try:
            char_freq = {}
            for char in text.lower():
                if char.isalpha():
                    char_freq[char] = char_freq.get(char, 0) + 1
           
            words = text.split()
            word_count = len(words)
            avg_word_length = sum(len(word) for word in words) / max(word_count, 1)
           
            specific_chars = {}
            for char in set(text.lower()):
                if char.isalpha():
                    specific_chars[char] = text.lower().count(char)
           
            analysis = {
                "total_characters": len(text),
                "total_words": word_count,
                "average_word_length": round(avg_word_length, 2),
                "character_frequencies": dict(sorted(char_freq.items(), key=lambda x: x[1], reverse=True)[:10]),
                "specific_character_counts": specific_chars
            }
           
            return json.dumps(analysis, indent=2)
        except Exception as e:
            return f"Analysis Error: {str(e)}"

The cells above bring together three building blocks: an AdvancedCallbackHandler that captures each tool call with a timestamped log entry and can summarize the total actions taken; a MathTool class that safely evaluates complex mathematical expressions by restricting eval to a whitelisted namespace; and a TextAnalyzer class that computes detailed text statistics, such as character frequencies, word count, and average word length, and returns the results as formatted JSON.
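The whitelist-based eval pattern in MathTool can be exercised in isolation. A minimal self-contained sketch of the same idea (not a full sandbox, which is why the heavy lifting still goes through Riza):

```python
import math

def safe_eval(expression: str):
    """Evaluate a math expression against a whitelisted namespace only.

    Blanking __builtins__ blocks access to open(), __import__(), and the
    rest of the standard builtins; any name not listed below raises
    NameError inside eval.
    """
    allowed = {
        "__builtins__": {},
        "abs": abs, "round": round, "min": min, "max": max,
        "sqrt": math.sqrt, "sin": math.sin, "pi": math.pi,
    }
    return eval(expression, allowed)

print(safe_eval("sqrt(16) + max(2, 3)"))  # 7.0
try:
    safe_eval("open('/etc/passwd')")      # blocked by the empty builtins
except NameError as e:
    print("blocked:", e)
```

Note that restricted eval is a convenience for simple arithmetic, not a security boundary; arbitrary untrusted code should still be routed to the sandboxed ExecPython tool.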

def validate_api_keys():
    """Validate API keys before creating agents."""
    try:
        test_llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",  
            temperature=0
        )
        test_llm.invoke("test")
        print("✅ Gemini API key validated")
       
        test_tool = ExecPython()
        print("✅ Riza API key validated")
       
        return True
    except Exception as e:
        print(f"❌ API key validation failed: {str(e)}")
        print("Please check your API keys and try again")
        return False


if not validate_api_keys():
    exit()


python_tool = ExecPython()
math_tool = Tool(
    name="advanced_math",
    description="Perform complex mathematical calculations and evaluations",
    func=MathTool.complex_calculation
)
text_analyzer_tool = Tool(
    name="text_analyzer",
    description="Analyze text for character frequencies, word statistics, and specific character counts",
    func=TextAnalyzer.analyze_text
)


tools = [python_tool, math_tool, text_analyzer_tool]


try:
    llm = ChatGoogleGenerativeAI(
        model="gemini-1.5-pro",
        temperature=0.1,
        max_tokens=2048,
        top_p=0.8,
        top_k=40
    )
    print("✅ Gemini model initialized successfully")
except Exception as e:
    print(f"⚠️  Gemini Pro failed, falling back to Flash: {e}")
    llm = ChatGoogleGenerativeAI(
        model="gemini-1.5-flash",
        temperature=0.1,
        max_tokens=2048
    )

In this cell, we first define and run validate_api_keys() to make sure both the Gemini and Riza credentials work, issuing a test LLM call and instantiating the Riza ExecPython tool. If validation fails, we exit the notebook. We then instantiate python_tool for secure code execution, wrap our MathTool and TextAnalyzer methods as LangChain Tool objects, and collect everything into the tools list. Finally, we initialize the Gemini model with custom settings (temperature, max_tokens, top_p, top_k) and gracefully fall back to the lighter Flash variant if the Pro configuration fails.
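The Pro-to-Flash fallback is one instance of a general "try the preferred configuration, degrade gracefully" pattern. A hedged standalone sketch with stand-in constructors (these are illustrative callables, not the real LangChain classes):

```python
def init_with_fallback(primary, fallback):
    """Try the primary constructor; on any failure, use the fallback.

    Both arguments are zero-argument callables returning a ready model,
    mirroring the ChatGoogleGenerativeAI try/except in the notebook.
    """
    try:
        return primary(), "primary"
    except Exception:
        return fallback(), "fallback"

# Stand-in constructors for illustration only.
def pro_model():
    raise RuntimeError("quota exceeded")  # simulate the Pro tier failing

def flash_model():
    return {"model": "flash", "max_tokens": 2048}

model, tier = init_with_fallback(pro_model, flash_model)
print(tier)  # fallback
```

Keeping the fallback path configured with sane defaults means a transient quota or availability issue degrades quality rather than crashing the notebook.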

prompt_template = ChatPromptTemplate.from_messages([
    ("system", """You are an advanced AI assistant with access to powerful tools.


Key capabilities:
- Python code execution for complex computations
- Advanced mathematical operations
- Text analysis and character counting
- Problem decomposition and step-by-step reasoning


Instructions:
1. Always break down complex problems into smaller steps
2. Use the most appropriate tool for each task
3. Verify your results when possible
4. Provide clear explanations of your reasoning
5. For text analysis questions (like counting characters), use the text_analyzer tool first, then verify with Python if needed


Be precise, thorough, and helpful."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])


memory = ConversationBufferWindowMemory(
    k=5,  
    return_messages=True,
    memory_key="chat_history"
)


callback_handler = AdvancedCallbackHandler()


agent = create_tool_calling_agent(llm, tools, prompt_template)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory,
    callbacks=[callback_handler],
    max_iterations=10,
    early_stopping_method="generate"
)

This cell builds the agent's "brain" and workflow: it defines a structured ChatPromptTemplate whose system message spells out the tool set and reasoning style, sets up a sliding-window conversation memory that keeps the last five exchanges, and instantiates the AdvancedCallbackHandler for real-time logging. It then creates the tool-calling agent by binding the Gemini LLM, the custom tools, and the prompt template, and wraps it in an AgentExecutor that manages execution (up to ten steps), leverages memory for context, streams verbose output, and stops cleanly once the agent generates a final response.
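Conceptually, ConversationBufferWindowMemory with k=5 behaves like a bounded queue of exchanges: old turns fall off as new ones arrive. A sketch of the idea (our own toy class, not LangChain's implementation):

```python
from collections import deque

class WindowMemory:
    """Keep only the last k (human, ai) exchanges, like k=5 above."""

    def __init__(self, k: int):
        self.turns = deque(maxlen=k)  # oldest turns are evicted automatically

    def add_turn(self, human: str, ai: str):
        self.turns.append((human, ai))

    def context(self):
        return list(self.turns)

mem = WindowMemory(k=2)
for i in range(4):
    mem.add_turn(f"question {i}", f"answer {i}")

print(mem.context())  # only the last two exchanges survive
```

The window keeps prompt size (and cost) bounded at the price of forgetting anything older than k exchanges, which is usually the right trade-off for a notebook assistant.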

def ask_question(question: str) -> Dict[str, Any]:
    """Ask a question to the advanced agent and return detailed results."""
    print(f"\n🤖 Processing: {question}")
    print("=" * 50)
   
    try:
        result = agent_executor.invoke({"input": question})
       
        output = result.get("output", "No output generated")
       
        print("\n📊 Execution Summary:")
        summary = callback_handler.get_execution_summary()
        print(f"Tools used: {summary['total_actions']}")
       
        return {
            "question": question,
            "answer": output,
            "execution_summary": summary,
            "success": True
        }
   
    except Exception as e:
        print(f"❌ Error: {str(e)}")
        return {
            "question": question,
            "error": str(e),
            "success": False
        }


test_questions = [
    "How many r's are in strawberry?",
    "Calculate the compound interest on $1000 at 5% for 3 years",
    "Analyze the word frequency in the sentence: 'The quick brown fox jumps over the lazy dog'",
    "What's the fibonacci sequence up to the 10th number?"
]


print("🚀 Advanced Gemini Agent with Riza - Ready!")
print("🔐 API keys configured securely")
print("Testing with sample questions...\n")


results = []
for question in test_questions:
    result = ask_question(question)
    results.append(result)
    print("\n" + "="*80 + "\n")


print("📈 FINAL SUMMARY:")
successful = sum(1 for r in results if r["success"])
print(f"Successfully processed: {successful}/{len(results)} questions")

Finally, we define a convenience function, ask_question(), which sends a user query to the agent executor, prints a header, captures the agent's answer (or error), and outputs a short execution summary showing how many tool calls were made. We then provide a list of sample questions covering character counting, compound-interest calculation, word-frequency analysis, and Fibonacci-sequence generation, iterate through them, and collect the results. After all the tests, the script prints a concise final summary of how many queries were processed successfully, confirming that the Gemini + Riza agent is up and running in Colab.
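Three of the sample questions have closed-form answers we can compute offline, which makes them useful ground truth when eyeballing the agent's output. A quick standalone check (compound interest with annual compounding is A = P(1 + r)^t, so $1000 at 5% for 3 years earns about $157.63):

```python
# Offline ground truth for the sample questions above.

# 1) "How many r's are in strawberry?"
r_count = "strawberry".count("r")
print(r_count)  # 3

# 2) Compound interest on $1000 at 5% for 3 years: P(1 + r)^t - P
interest = 1000 * (1 + 0.05) ** 3 - 1000
print(f"${interest:.2f}")

# 3) Fibonacci sequence up to the 10th number
fib = [0, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Questions with verifiable answers like these are a cheap way to smoke-test a tool-calling agent before pointing it at open-ended tasks.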

In short, by centering the architecture on Riza's secure execution environment, we create an AI agent that generates insightful responses through Gemini while running arbitrary Python code in a fully sandboxed, monitored context. The integration of Riza's ExecPython tool ensures that every computation, from advanced numerical routines to dynamic text analysis, is performed with strict security and transparency. With LangChain orchestrating tool calls and maintaining a contextual memory buffer, we now have a modular framework ready for real-world tasks such as automated data processing, research prototyping, and educational demonstrations.


View the notebook. All credit for this research goes to the researchers of this project.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform known for in-depth coverage of machine learning and deep learning news that is both technically sound and accessible to a wide audience. The platform draws over 2 million views per month, demonstrating its popularity among readers.
