Building Advanced Multi-Agent AI Workflows by Leveraging AutoGen and Semantic Kernel
In this tutorial, we walk you through the seamless integration of AutoGen and Semantic Kernel with Google's Gemini Flash model. We begin by setting up the GeminiWrapper and SemanticKernelGeminiPlugin classes to bridge Gemini's generative power with AutoGen's multi-agent orchestration. From there, we configure specialist agents, ranging from code reviewers to creative analysts, demonstrating how we can leverage AutoGen's ConversableAgent API alongside Semantic Kernel's decorated functions for text analysis, summarization, code review, and creative problem-solving. By combining AutoGen's robust agent framework with Semantic Kernel's function-driven approach, we create an advanced AI assistant that adapts to a variety of tasks with structured, actionable insights.
!pip install pyautogen semantic-kernel google-generativeai python-dotenv
import os
import asyncio
from typing import Dict, Any, List
import autogen
import google.generativeai as genai
from semantic_kernel import Kernel
from semantic_kernel.functions import KernelArguments
from semantic_kernel.functions.kernel_function_decorator import kernel_function
We first install the core dependencies: pyautogen, semantic-kernel, google-generativeai, and python-dotenv, ensuring we have every library our multi-agent and semantic-function setup requires. We then import the required Python modules (os, asyncio, typing), along with autogen for agent orchestration, genai for Gemini API access, and the Semantic Kernel classes and decorators we use to define our AI capabilities.
GEMINI_API_KEY = "Use Your API Key Here"
genai.configure(api_key=GEMINI_API_KEY)
config_list = [
    {
        "model": "gemini-1.5-flash",
        "api_key": GEMINI_API_KEY,
        "api_type": "google",
        "api_base": "https://generativelanguage.googleapis.com/v1beta",
    }
]
We define the GEMINI_API_KEY placeholder and immediately configure the genai client so that every subsequent Gemini call is authenticated. We then build a config_list holding the Gemini Flash settings, model name, API key, API type, and base URL, which we later hand to our LLM-backed agents.
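As an optional aside, since we installed python-dotenv above, we can load the key from a .env file instead of hardcoding it, and then cheaply verify that authentication works. This is a minimal sketch; the GEMINI_API_KEY entry name in the .env file is our own convention:

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a local .env file, if one exists
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY", GEMINI_API_KEY)
genai.configure(api_key=GEMINI_API_KEY)

# Listing models is a cheap way to confirm the key is accepted
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)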
class GeminiWrapper:
    """Wrapper for Gemini API to work with AutoGen"""

    def __init__(self, model_name="gemini-1.5-flash"):
        self.model = genai.GenerativeModel(model_name)

    def generate_response(self, prompt: str, temperature: float = 0.7) -> str:
        """Generate response using Gemini"""
        try:
            response = self.model.generate_content(
                prompt,
                generation_config=genai.types.GenerationConfig(
                    temperature=temperature,
                    max_output_tokens=2048,
                )
            )
            return response.text
        except Exception as e:
            return f"Gemini API Error: {str(e)}"
We encapsulate all Gemini Flash interactions in the GeminiWrapper class, where we initialize a GenerativeModel for the chosen model and expose a simple generate_response method. In this method, we pass the prompt and temperature to Gemini's generate_content API (capped at 2048 output tokens) and return either the raw response text or a formatted error message.
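Before wiring the wrapper into any agents, we can give it a quick, optional smoke test; the prompt here is just an illustrative example of ours:

wrapper = GeminiWrapper()
# A low temperature keeps this sanity check short and fairly deterministic
print(wrapper.generate_response("Explain recursion in one sentence.", temperature=0.2))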
class SemanticKernelGeminiPlugin:
    """Semantic Kernel plugin using Gemini Flash for advanced AI operations"""

    def __init__(self):
        self.kernel = Kernel()
        self.gemini = GeminiWrapper()

    @kernel_function(name="analyze_text", description="Analyze text for sentiment and key insights")
    def analyze_text(self, text: str) -> str:
        """Analyze text using Gemini Flash"""
        prompt = f"""
        Analyze the following text comprehensively:
        Text: {text}

        Provide analysis in this format:
        - Sentiment: [positive/negative/neutral with confidence]
        - Key Themes: [main topics and concepts]
        - Insights: [important observations and patterns]
        - Recommendations: [actionable next steps]
        - Tone: [formal/informal/technical/emotional]
        """
        return self.gemini.generate_response(prompt, temperature=0.3)

    @kernel_function(name="generate_summary", description="Generate comprehensive summary")
    def generate_summary(self, content: str) -> str:
        """Generate summary using Gemini's advanced capabilities"""
        prompt = f"""
        Create a comprehensive summary of the following content:
        Content: {content}

        Provide:
        1. Executive Summary (2-3 sentences)
        2. Key Points (bullet format)
        3. Important Details
        4. Conclusion/Implications
        """
        return self.gemini.generate_response(prompt, temperature=0.4)

    @kernel_function(name="code_analysis", description="Analyze code for quality and suggestions")
    def code_analysis(self, code: str) -> str:
        """Analyze code using Gemini's code understanding"""
        prompt = f"""
        Analyze this code comprehensively:
        ```
        {code}
        ```

        Provide analysis covering:
        - Code Quality: [readability, structure, best practices]
        - Performance: [efficiency, optimization opportunities]
        - Security: [potential vulnerabilities, security best practices]
        - Maintainability: [documentation, modularity, extensibility]
        - Suggestions: [specific improvements with examples]
        """
        return self.gemini.generate_response(prompt, temperature=0.2)

    @kernel_function(name="creative_solution", description="Generate creative solutions to problems")
    def creative_solution(self, problem: str) -> str:
        """Generate creative solutions using Gemini's creative capabilities"""
        prompt = f"""
        Problem: {problem}

        Generate creative solutions:
        1. Conventional Approaches (2-3 standard solutions)
        2. Innovative Ideas (3-4 creative alternatives)
        3. Hybrid Solutions (combining different approaches)
        4. Implementation Strategy (practical steps)
        5. Potential Challenges and Mitigation
        """
        return self.gemini.generate_response(prompt, temperature=0.8)
We encapsulate our Semantic Kernel logic in the SemanticKernelGeminiPlugin class, where we initialize both the Kernel and the GeminiWrapper to power custom AI functions. Using the @kernel_function decorator, we declare methods such as analyze_text, generate_summary, code_analysis, and creative_solution, each of which builds a structured prompt and delegates the heavy lifting to Gemini Flash. This plugin lets us cleanly register and invoke advanced AI operations within a Semantic Kernel environment.
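To see how these decorated functions plug into the kernel itself, here is a minimal sketch of registering and invoking the plugin through Semantic Kernel; it assumes the semantic-kernel 1.x Python API, and the plugin name "gemini_tools" and the sample text are our own:

plugin = SemanticKernelGeminiPlugin()
# Register the decorated methods on the plugin's kernel under a plugin name of our choosing
gemini_tools = plugin.kernel.add_plugin(plugin, plugin_name="gemini_tools")

async def demo_kernel_call():
    # Invoke the registered function through the kernel instead of calling the method directly
    result = await plugin.kernel.invoke(
        gemini_tools["analyze_text"],
        KernelArguments(text="Gemini Flash responses arrive quickly and cheaply."),
    )
    print(result)

await demo_kernel_call()  # Colab supports top-level await; use asyncio.run(...) in a plain script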
class AdvancedGeminiAgent:
    """Advanced AI Agent using Gemini Flash with AutoGen and Semantic Kernel"""

    def __init__(self):
        self.sk_plugin = SemanticKernelGeminiPlugin()
        self.gemini = GeminiWrapper()
        self.setup_agents()

    def setup_agents(self):
        """Initialize AutoGen agents with Gemini Flash"""
        gemini_config = {
            "config_list": [{"model": "gemini-1.5-flash", "api_key": GEMINI_API_KEY}],
            "temperature": 0.7,
        }

        self.assistant = autogen.ConversableAgent(
            name="GeminiAssistant",
            llm_config=gemini_config,
            system_message="""You are an advanced AI assistant powered by Gemini Flash with Semantic Kernel capabilities.
            You excel at analysis, problem-solving, and creative thinking. Always provide comprehensive, actionable insights.
            Use structured responses and consider multiple perspectives.""",
            human_input_mode="NEVER",
        )

        self.code_reviewer = autogen.ConversableAgent(
            name="GeminiCodeReviewer",
            llm_config={**gemini_config, "temperature": 0.3},
            system_message="""You are a senior code reviewer powered by Gemini Flash.
            Analyze code for best practices, security, performance, and maintainability.
            Provide specific, actionable feedback with examples.""",
            human_input_mode="NEVER",
        )

        self.creative_analyst = autogen.ConversableAgent(
            name="GeminiCreativeAnalyst",
            llm_config={**gemini_config, "temperature": 0.8},
            system_message="""You are a creative problem solver and innovation expert powered by Gemini Flash.
            Generate innovative solutions, and provide fresh perspectives.
            Balance creativity with practicality.""",
            human_input_mode="NEVER",
        )

        self.data_specialist = autogen.ConversableAgent(
            name="GeminiDataSpecialist",
            llm_config={**gemini_config, "temperature": 0.4},
            system_message="""You are a data analysis expert powered by Gemini Flash.
            Provide evidence-based recommendations and statistical perspectives.""",
            human_input_mode="NEVER",
        )

        self.user_proxy = autogen.ConversableAgent(
            name="UserProxy",
            human_input_mode="NEVER",
            max_consecutive_auto_reply=2,
            is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
            llm_config=False,
        )

    def analyze_with_semantic_kernel(self, content: str, analysis_type: str) -> str:
        """Bridge function between AutoGen and Semantic Kernel with Gemini"""
        try:
            if analysis_type == "text":
                return self.sk_plugin.analyze_text(content)
            elif analysis_type == "code":
                return self.sk_plugin.code_analysis(content)
            elif analysis_type == "summary":
                return self.sk_plugin.generate_summary(content)
            elif analysis_type == "creative":
                return self.sk_plugin.creative_solution(content)
            else:
                return "Invalid analysis type. Use 'text', 'code', 'summary', or 'creative'."
        except Exception as e:
            return f"Semantic Kernel Analysis Error: {str(e)}"

    def multi_agent_collaboration(self, task: str) -> Dict[str, str]:
        """Orchestrate multi-agent collaboration using Gemini"""
        results = {}

        agents = {
            "assistant": (self.assistant, "comprehensive analysis"),
            "code_reviewer": (self.code_reviewer, "code review perspective"),
            "creative_analyst": (self.creative_analyst, "creative solutions"),
            "data_specialist": (self.data_specialist, "data-driven insights")
        }

        for agent_name, (agent, perspective) in agents.items():
            try:
                prompt = f"Task: {task}\n\nProvide your {perspective} on this task."
                response = agent.generate_reply([{"role": "user", "content": prompt}])
                results[agent_name] = response if isinstance(response, str) else str(response)
            except Exception as e:
                results[agent_name] = f"Agent {agent_name} error: {str(e)}"

        return results

    def run_comprehensive_analysis(self, query: str) -> Dict[str, Any]:
        """Run comprehensive analysis using all Gemini-powered capabilities"""
        results = {}

        analyses = ["text", "summary", "creative"]
        for analysis_type in analyses:
            try:
                results[f"sk_{analysis_type}"] = self.analyze_with_semantic_kernel(query, analysis_type)
            except Exception as e:
                results[f"sk_{analysis_type}"] = f"Error: {str(e)}"

        try:
            results["multi_agent"] = self.multi_agent_collaboration(query)
        except Exception as e:
            results["multi_agent"] = f"Multi-agent error: {str(e)}"

        try:
            results["direct_gemini"] = self.gemini.generate_response(
                f"Provide a comprehensive analysis of: {query}", temperature=0.6
            )
        except Exception as e:
            results["direct_gemini"] = f"Direct Gemini error: {str(e)}"

        return results
We anchor our end-to-end AI orchestration in the AdvancedGeminiAgent class, where we initialize the Semantic Kernel plugin and the Gemini wrapper, and configure a suite of specialist AutoGen agents (assistant, code reviewer, creative analyst, data specialist, and user proxy). With simple methods for Semantic Kernel bridging, multi-agent collaboration, and direct Gemini calls, we enable a seamless, comprehensive analysis pipeline for any user query.
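Before running the full demo below, we can also exercise the class piecemeal; this is a minimal usage sketch, with sample task strings of our own:

agent = AdvancedGeminiAgent()

# A single Semantic Kernel analysis through the bridge method
print(agent.analyze_with_semantic_kernel("AI tutors could personalize homework feedback.", "text"))

# Fan one task out to all four specialist agents and skim each reply
for name, reply in agent.multi_agent_collaboration("Design an offline-first study app").items():
    print(f"--- {name} ---\n{reply[:300]}\n")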
def main():
    """Main execution function for Google Colab with Gemini Flash"""
    print("🚀 Initializing Advanced Gemini Flash AI Agent...")
    print("⚡ Using Gemini 1.5 Flash for high-speed, cost-effective AI processing")

    try:
        agent = AdvancedGeminiAgent()
        print("✅ Agent initialized successfully!")
    except Exception as e:
        print(f"❌ Initialization error: {str(e)}")
        print("💡 Make sure to set your Gemini API key!")
        return

    demo_queries = [
        "How can AI transform education in developing countries?",
        "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
    ]

    for i, query in enumerate(demo_queries, 1):
        print(f"\n📊 Demo {i}: {query[:60]}...")
        results = agent.run_comprehensive_analysis(query)
        for section, output in results.items():
            print(f"\n--- {section} ---")
            print(output)

main()
Finally, in the main function we run, we initialize the agent, print status messages, and iterate through a set of demo queries. For each query, we collect and display the results of the Semantic Kernel analyses, the multi-agent collaboration, and the direct Gemini response, giving a clear, step-by-step demonstration of our multi-agent AI workflow.
In conclusion, we demonstrate how AutoGen and Semantic Kernel complement each other to produce a versatile, multi-agent AI system powered by Gemini Flash. We highlight how AutoGen simplifies the orchestration of diverse expert agents, while Semantic Kernel provides a clean, declarative layer for defining and invoking advanced AI functions. By uniting these tools in a Colab notebook, we enable rapid experimentation and prototyping of complex AI workflows without sacrificing clarity or control.
