
Build an Intelligent Multi-Tool AI Agent Interface with Streamlit for Seamless Real-Time Interaction

In this tutorial, we build a powerful, interactive Streamlit application that brings together the capabilities of LangChain, the Google Gemini API, and a suite of advanced tools to create a smart AI assistant. Using Streamlit's intuitive interface, we create a chat-based system that can search the web in real time, fetch Wikipedia content, perform calculations, remember key details, and handle conversation history. Whether you're a developer, a researcher, or just exploring AI, this setup lets you interact with a multi-agent system directly from the browser with minimal code and maximum flexibility.

!pip install -q streamlit langchain langchain-google-genai langchain-community
!pip install -q pyngrok python-dotenv wikipedia duckduckgo-search
!npm install -g localtunnel


import streamlit as st
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
import asyncio
import threading
import time
from datetime import datetime
import json

We first install all the necessary Python and Node.js packages required for our AI assistant app: Streamlit for the frontend, LangChain for the agent logic, and tools like Wikipedia, DuckDuckGo, and ngrok/localtunnel for external search and hosting. Once set up, we import all the modules and start building our interactive multi-tool AI agent.

GOOGLE_API_KEY = "your-gemini-api-key-here"
NGROK_AUTH_TOKEN = "your-ngrok-auth-token-here"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

Next, we configure the environment by setting the Google Gemini API key and the ngrok authentication token. We assign these credentials to variables and export GOOGLE_API_KEY as an environment variable so the LangChain agent can access the Gemini model securely during execution.
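Since python-dotenv was installed in the setup cell, we can optionally keep the credentials out of the notebook entirely. The following is a minimal sketch, assuming a hypothetical .env file with GOOGLE_API_KEY=... and NGROK_AUTH_TOKEN=... entries; it falls back to the hardcoded values when no .env file is present:

# Optional: load credentials from a .env file instead of hardcoding them.
# Assumes a .env file containing GOOGLE_API_KEY=... and NGROK_AUTH_TOKEN=... (hypothetical).
from dotenv import load_dotenv

load_dotenv()  # reads .env (if present) into the process environment
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY", GOOGLE_API_KEY)
NGROK_AUTH_TOKEN = os.getenv("NGROK_AUTH_TOKEN", NGROK_AUTH_TOKEN)
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY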

class InnovativeAgentTools:
   """Advanced tool collection for the multi-agent system"""
  
   @staticmethod
   def get_calculator_tool():
       def calculate(expression: str) -> str:
           """Calculate mathematical expressions safely"""
           try:
               allowed_chars = set('0123456789+-*/.() ')
               if all(c in allowed_chars for c in expression):
                   result = eval(expression)
                   return f"Result: {result}"
               else:
                   return "Error: Invalid mathematical expression"
           except Exception as e:
               return f"Calculation error: {str(e)}"
      
       return Tool(
           name="Calculator",
           func=calculate,
           description="Calculate mathematical expressions. Input should be a valid math expression."
       )
  
   @staticmethod
   def get_memory_tool(memory_store):
       def save_memory(key_value: str) -> str:
           """Save information to memory"""
           try:
               key, value = key_value.split(":", 1)
               memory_store[key.strip()] = value.strip()
               return f"Saved '{key.strip()}' to memory"
           except:
               return "Error: Use format 'key: value'"
      
       def recall_memory(key: str) -> str:
           """Recall information from memory"""
           return memory_store.get(key.strip(), f"No memory found for '{key}'")
      
       return [
           Tool(name="SaveMemory", func=save_memory,
                description="Save information to memory. Format: 'key: value'"),
           Tool(name="RecallMemory", func=recall_memory,
                description="Recall saved information. Input: key to recall")
       ]
  
   @staticmethod
   def get_datetime_tool():
       def get_current_datetime(format_type: str = "full") -> str:
           """Get current date and time"""
           now = datetime.now()
           if format_type == "date":
               return now.strftime("%Y-%m-%d")
           elif format_type == "time":
               return now.strftime("%H:%M:%S")
           else:
               return now.strftime("%Y-%m-%d %H:%M:%S")
      
       return Tool(
           name="DateTime",
           func=get_current_datetime,
           description="Get current date/time. Options: 'date', 'time', or 'full'"
       )

Here we define the InnovativeAgentTools class to equip our AI agent with specialized capabilities. We implement a Calculator tool for safe expression evaluation, memory tools to save and recall information across turns, and a DateTime tool to fetch the current date and time. These tools let our Streamlit AI agent reason, remember, and respond like a real assistant.
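Before wiring these tools into an agent, we can sanity-check them directly; a LangChain Tool exposes the wrapped callable via its func attribute. A quick, illustrative smoke test:

# Quick smoke test of the custom tools outside the agent (illustrative only)
calc = InnovativeAgentTools.get_calculator_tool()
print(calc.func("15 * 8 + 32"))                  # -> "Result: 152"

store = {}
save_tool, recall_tool = InnovativeAgentTools.get_memory_tool(store)
print(save_tool.func("favorite color: blue"))    # -> "Saved 'favorite color' to memory"
print(recall_tool.func("favorite color"))        # -> "blue"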

class MultiAgentSystem:
   """Innovative multi-agent system with specialized capabilities"""
  
   def __init__(self, api_key: str):
       self.llm = ChatGoogleGenerativeAI(
           model="gemini-pro",
           google_api_key=api_key,
           temperature=0.7,
           convert_system_message_to_human=True
       )
       self.memory_store = {}
       self.conversation_memory = ConversationBufferWindowMemory(
           memory_key="chat_history",
           k=10,
           return_messages=True
       )
       self.tools = self._initialize_tools()
       self.agent = self._create_agent()
  
   def _initialize_tools(self):
       """Initialize all available tools"""
       tools = []
      
       tools.extend([
           DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
           WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
       ])
      
       tools.append(InnovativeAgentTools.get_calculator_tool())
       tools.append(InnovativeAgentTools.get_datetime_tool())
       tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
      
       return tools
  
   def _create_agent(self):
       """Create the ReAct agent with advanced prompt"""
       prompt = PromptTemplate.from_template("""
🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.


AVAILABLE TOOLS:
{tools}


TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer


MEMORY CAPABILITIES:
- You can save important information using SaveMemory
- You can recall previous information using RecallMemory
- Always try to remember user preferences and context


CONVERSATION HISTORY:
{chat_history}


CURRENT QUESTION: {input}


REASONING PROCESS:
{agent_scratchpad}


Begin your response with your thought process, then take action if needed.
""")
      
       agent = create_react_agent(self.llm, self.tools, prompt)
       return AgentExecutor(
           agent=agent,
           tools=self.tools,
           memory=self.conversation_memory,
           verbose=True,
           handle_parsing_errors=True,
           max_iterations=5
       )
  
   def chat(self, message: str, callback_handler=None):
       """Process user message and return response"""
       try:
           if callback_handler:
               response = self.agent.invoke(
                   {"input": message},
                   {"callbacks": [callback_handler]}
               )
           else:
               response = self.agent.invoke({"input": message})
           return response["output"]
       except Exception as e:
           return f"Error processing request: {str(e)}"

In this section, we build the core of the application, the MultiAgentSystem class. Here we use LangChain to integrate the Gemini Pro model and initialize all the necessary tools, including web search, memory, and calculator features. We configure a ReAct agent with a custom prompt that guides tool usage and memory handling. Finally, we define a chat method that lets the agent process user input, invoke tools when necessary, and generate intelligent, context-aware responses.
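With the class in place, a minimal smoke test outside Streamlit might look like the following sketch (assuming GOOGLE_API_KEY holds a valid key; the exact replies will vary run to run):

# Illustrative usage sketch, not part of the app itself
system = MultiAgentSystem(api_key=GOOGLE_API_KEY)
print(system.chat("Calculate 15 * 8 + 32, then remember that my name is Alex."))
print(system.memory_store)   # any key/value pairs the agent chose to save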

def create_streamlit_app():
    """Create the innovative Streamlit application"""

    st.set_page_config(
        page_title="🚀 Advanced LangChain Agent with Gemini",
        page_icon="🤖",
        layout="wide",
        initial_sidebar_state="expanded"
    )

    # Custom CSS styles (omitted) are injected here via st.markdown(..., unsafe_allow_html=True)

    st.markdown("""# 🚀 Advanced LangChain Agent with Gemini
Powered by LangChain + Gemini API + Streamlit""", unsafe_allow_html=True)

    with st.sidebar:
        st.header("🔧 Configuration")
        api_key = st.text_input(
            "🔑 Google AI API Key",
            type="password",
            value=GOOGLE_API_KEY if GOOGLE_API_KEY != "your-gemini-api-key-here" else "",
            help="Get your API key from Google AI Studio"
        )
        if not api_key:
            st.error("Please enter your Google AI API key to continue")
            st.stop()
        st.success("✅ API Key configured")

        st.header("🤖 Agent Capabilities")
        st.markdown("""
        - 🔍 **Web Search** (DuckDuckGo)
        - 📚 **Wikipedia Lookup**
        - 🧮 **Mathematical Calculator**
        - 🧠 **Persistent Memory**
        - 📅 **Date & Time**
        - 💬 **Conversation History**
        """)

        if 'agent_system' in st.session_state:
            st.header("🧠 Memory Store")
            memory = st.session_state.agent_system.memory_store
            if memory:
                for key, value in memory.items():
                    st.markdown(f"**{key}**: {value}", unsafe_allow_html=True)
            else:
                st.info("No memories stored yet")

    if 'agent_system' not in st.session_state:
        with st.spinner("🔄 Initializing Advanced Agent System..."):
            st.session_state.agent_system = MultiAgentSystem(api_key)
        st.success("✅ Agent System Ready!")

    st.header("💬 Interactive Chat")

    if 'messages' not in st.session_state:
        st.session_state.messages = [{
            "role": "assistant",
            "content": """🤖 Hello! I'm your advanced AI assistant powered by Gemini. I can:

• Search the web and Wikipedia for information
• Perform mathematical calculations
• Remember important information across our conversation
• Provide current date and time
• Maintain conversation context

Try asking me something like:
- "Calculate 15 * 8 + 32"
- "Search for recent news about AI"
- "Remember that my favorite color is blue"
- "What's the current time?"
"""
        }]

    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if prompt := st.chat_input("Ask me anything..."):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        with st.chat_message("assistant"):
            callback_handler = StreamlitCallbackHandler(st.container())
            with st.spinner("🤔 Thinking..."):
                response = st.session_state.agent_system.chat(prompt, callback_handler)
            st.markdown(response, unsafe_allow_html=True)
            st.session_state.messages.append({"role": "assistant", "content": response})

    st.header("💡 Example Queries")
    col1, col2, col3 = st.columns(3)

    with col1:
        if st.button("🔍 Search Example"):
            example = "Search for the latest developments in quantum computing"
            st.session_state.example_query = example
    with col2:
        if st.button("🧮 Math Example"):
            example = "Calculate the compound interest on $1000 at 5% for 3 years"
            st.session_state.example_query = example
    with col3:
        if st.button("🧠 Memory Example"):
            example = "Remember that I work as a data scientist at TechCorp"
            st.session_state.example_query = example

    if 'example_query' in st.session_state:
        st.info(f"Example query: {st.session_state.example_query}")

In this section, we put everything together into an interactive web interface using Streamlit. We configure the page layout, inject custom CSS styles, and build a sidebar for entering the API key and reviewing the agent's capabilities. We initialize the multi-agent system, maintain a message history, and enable a chat interface so users can interact in real time. To make exploration easier, we also provide example buttons for search, math, and memory-related queries, all within a cleanly responsive UI.
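The one Streamlit idiom worth pausing on is st.session_state: Streamlit reruns the entire script on every interaction, so anything that must survive between turns (the agent instance, the chat history) has to live there. A stripped-down sketch of the pattern the app relies on:

# Minimal session-state chat loop (standalone sketch of the pattern above)
import streamlit as st

if "messages" not in st.session_state:
    st.session_state.messages = []        # persists across script reruns

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})

for m in st.session_state.messages:       # re-render the full history each rerun
    with st.chat_message(m["role"]):
        st.markdown(m["content"])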

def setup_ngrok_auth(auth_token):
    """Setup ngrok authentication"""
    try:
        from pyngrok import ngrok, conf

        conf.get_default().auth_token = auth_token

        try:
            tunnels = ngrok.get_tunnels()
            print("✅ Ngrok authentication successful!")
            return True
        except Exception as e:
            print(f"❌ Ngrok authentication failed: {e}")
            return False

    except ImportError:
        print("❌ pyngrok not installed. Installing...")
        import subprocess
        subprocess.run(['pip', 'install', 'pyngrok'], check=True)
        return setup_ngrok_auth(auth_token)


def get_ngrok_token_instructions():
   """Provide instructions for getting ngrok token"""
   return """
🔧 NGROK AUTHENTICATION SETUP:


1. Sign up for an ngrok account:
   - Visit: https://ngrok.com/
   - Create a free account


2. Get your authentication token:
   - Go to: https://dashboard.ngrok.com/get-started/your-authtoken
   - Copy your authtoken


3. Replace 'your-ngrok-auth-token-here' in the code with your actual token


4. Alternative methods if ngrok fails:
  - Use Google Colab's built-in public URL feature
  - Use localtunnel: !npx localtunnel --port 8501
  - Use serveo.net: !ssh -R 80:localhost:8501 serveo.net
"""

Here we set up a helper to authenticate with ngrok, which lets us expose the local Streamlit app to the internet. We use the pyngrok library to configure the authentication token and verify the connection. If the token is missing or invalid, we print detailed instructions on how to obtain one and suggest alternative tunneling methods, such as localtunnel or serveo, so we can easily host and share the app from environments like Google Colab.
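As a quick illustration of how these helpers fit together (a sketch; pyngrok's ngrok.connect is the only external call, and 8501 is the Streamlit port used throughout this setup):

# Hypothetical usage of the helpers above
if setup_ngrok_auth(NGROK_AUTH_TOKEN):
    from pyngrok import ngrok
    tunnel = ngrok.connect(8501)           # tunnel to the Streamlit port
    print(f"Public URL: {tunnel.public_url}")
else:
    print(get_ngrok_token_instructions())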

def main():
   """Main function to run the application"""
   try:
       create_streamlit_app()
   except Exception as e:
       st.error(f"Application error: {str(e)}")
       st.info("Please check your API key and try refreshing the page")

The main() function acts as the entry point for the Streamlit application. It simply calls create_streamlit_app() to launch the complete interface. If any issue arises, such as a missing API key or a failed tool initialization, we catch the error gracefully and display helpful messages so the user knows how to recover and keep using the application.

def run_in_colab():
    """Run the application in Google Colab with proper ngrok setup"""

    print("🚀 Starting Advanced LangChain Agent Setup...")

    if NGROK_AUTH_TOKEN == "your-ngrok-auth-token-here":
        print("⚠️ NGROK_AUTH_TOKEN not configured!")
        print(get_ngrok_token_instructions())

        print("🔄 Attempting alternative tunnel methods...")
        try_alternative_tunnels()
        return

    print("📦 Installing required packages...")
    import subprocess

    packages = [
        'streamlit',
        'langchain',
        'langchain-google-genai',
        'langchain-community',
        'wikipedia',
        'duckduckgo-search',
        'pyngrok'
    ]

    for package in packages:
        try:
            subprocess.run(['pip', 'install', package], check=True, capture_output=True)
            print(f"✅ {package} installed")
        except subprocess.CalledProcessError:
            print(f"⚠️ Failed to install {package}")

    app_content = '''
import streamlit as st
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool, WikipediaQueryRun, DuckDuckGoSearchRun
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.streamlit import StreamlitCallbackHandler
from langchain_community.utilities import WikipediaAPIWrapper, DuckDuckGoSearchAPIWrapper
from datetime import datetime


# Configuration - Replace with your actual keys
GOOGLE_API_KEY = "''' + GOOGLE_API_KEY + '''"
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY


class InnovativeAgentTools:
   @staticmethod
   def get_calculator_tool():
       def calculate(expression: str) -> str:
           try:
               allowed_chars = set('0123456789+-*/.() ')
               if all(c in allowed_chars for c in expression):
                   result = eval(expression)
                   return f"Result: {result}"
               else:
                   return "Error: Invalid mathematical expression"
           except Exception as e:
               return f"Calculation error: {str(e)}"
      
       return Tool(name="Calculator", func=calculate,
                  description="Calculate mathematical expressions. Input should be a valid math expression.")
  
   @staticmethod
   def get_memory_tool(memory_store):
       def save_memory(key_value: str) -> str:
           try:
               key, value = key_value.split(":", 1)
               memory_store[key.strip()] = value.strip()
               return f"Saved '{key.strip()}' to memory"
           except:
               return "Error: Use format 'key: value'"
      
       def recall_memory(key: str) -> str:
           return memory_store.get(key.strip(), f"No memory found for '{key}'")
      
       return [
           Tool(name="SaveMemory", func=save_memory, description="Save information to memory. Format: 'key: value'"),
           Tool(name="RecallMemory", func=recall_memory, description="Recall saved information. Input: key to recall")
       ]
  
   @staticmethod
   def get_datetime_tool():
       def get_current_datetime(format_type: str = "full") -> str:
           now = datetime.now()
           if format_type == "date":
               return now.strftime("%Y-%m-%d")
           elif format_type == "time":
               return now.strftime("%H:%M:%S")
           else:
               return now.strftime("%Y-%m-%d %H:%M:%S")
      
       return Tool(name="DateTime", func=get_current_datetime,
                  description="Get current date/time. Options: 'date', 'time', or 'full'")


class MultiAgentSystem:
   def __init__(self, api_key: str):
       self.llm = ChatGoogleGenerativeAI(
           model="gemini-pro",
           google_api_key=api_key,
           temperature=0.7,
           convert_system_message_to_human=True
       )
       self.memory_store = {}
       self.conversation_memory = ConversationBufferWindowMemory(
           memory_key="chat_history", k=10, return_messages=True
       )
       self.tools = self._initialize_tools()
       self.agent = self._create_agent()
  
   def _initialize_tools(self):
       tools = []
       try:
           tools.extend([
               DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper()),
               WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
           ])
       except Exception as e:
           st.warning(f"Search tools may have limited functionality: {e}")
      
       tools.append(InnovativeAgentTools.get_calculator_tool())
       tools.append(InnovativeAgentTools.get_datetime_tool())
       tools.extend(InnovativeAgentTools.get_memory_tool(self.memory_store))
       return tools
  
   def _create_agent(self):
       prompt = PromptTemplate.from_template("""
🤖 You are an advanced AI assistant with access to multiple tools and persistent memory.


AVAILABLE TOOLS:
{tools}


TOOL USAGE FORMAT:
- Think step by step about what you need to do
- Use Action: tool_name
- Use Action Input: your input
- Wait for Observation
- Continue until you have a final answer


CONVERSATION HISTORY:
{chat_history}


CURRENT QUESTION: {input}


REASONING PROCESS:
{agent_scratchpad}


Begin your response with your thought process, then take action if needed.
""")
      
       agent = create_react_agent(self.llm, self.tools, prompt)
       return AgentExecutor(agent=agent, tools=self.tools, memory=self.conversation_memory,
                          verbose=True, handle_parsing_errors=True, max_iterations=5)
  
   def chat(self, message: str, callback_handler=None):
       try:
           if callback_handler:
               response = self.agent.invoke({"input": message}, {"callbacks": [callback_handler]})
           else:
               response = self.agent.invoke({"input": message})
           return response["output"]
       except Exception as e:
           return f"Error processing request: {str(e)}"


# Streamlit App
st.set_page_config(page_title="🚀 Advanced LangChain Agent", page_icon="🤖", layout="wide")


st.markdown("""

""", unsafe_allow_html=True)


st.markdown('

Powered by LangChain + Gemini API

', unsafe_allow_html=True) with st.sidebar: st.header("๐Ÿ”ง Configuration") api_key = st.text_input("๐Ÿ”‘ Google AI API Key", type="password", value=GOOGLE_API_KEY) if not api_key: st.error("Please enter your Google AI API key") st.stop() st.success("โœ… API Key configured") st.header("๐Ÿค– Agent Capabilities") st.markdown("- ๐Ÿ” Web Search\n- ๐Ÿ“š Wikipedia\n- ๐Ÿงฎ Calculator\n- ๐Ÿง  Memory\n- ๐Ÿ“… Date/Time") if 'agent_system' in st.session_state and st.session_state.agent_system.memory_store: st.header("๐Ÿง  Memory Store") for key, value in st.session_state.agent_system.memory_store.items(): st.markdown(f'

{key}: {value}

', unsafe_allow_html=True) if 'agent_system' not in st.session_state: with st.spinner("๐Ÿ”„ Initializing Agent..."): st.session_state.agent_system = MultiAgentSystem(api_key) st.success("โœ… Agent Ready!") if 'messages' not in st.session_state: st.session_state.messages = [{ "role": "assistant", "content": "๐Ÿค– Hello! I'm your advanced AI assistant. I can search, calculate, remember information, and more! Try asking me to: calculate something, search for information, or remember a fact about you." }] for message in st.session_state.messages: with st.chat_message(message["role"]): st.markdown(message["content"]) if prompt := st.chat_input("Ask me anything..."): st.session_state.messages.append({"role": "user", "content": prompt}) with st.chat_message("user"): st.markdown(prompt) with st.chat_message("assistant"): callback_handler = StreamlitCallbackHandler(st.container()) with st.spinner("๐Ÿค” Thinking..."): response = st.session_state.agent_system.chat(prompt, callback_handler) st.markdown(f'

{response}

', unsafe_allow_html=True) st.session_state.messages.append({"role": "assistant", "content": response}) # Example buttons st.header("๐Ÿ’ก Try These Examples") col1, col2, col3 = st.columns(3) with col1: if st.button("๐Ÿงฎ Calculate 15 * 8 + 32"): st.rerun() with col2: if st.button("๐Ÿ” Search AI news"): st.rerun() with col3: if st.button("๐Ÿง  Remember my name is Alex"): st.rerun() ''' with open('streamlit_app.py', 'w') as f: f.write(app_content) print("โœ… Streamlit app file created successfully!") if setup_ngrok_auth(NGROK_AUTH_TOKEN): start_streamlit_with_ngrok() else: print("โŒ Ngrok authentication failed. Trying alternative methods...") try_alternative_tunnels()

In the run_in_colab() function, we make it easy to deploy the Streamlit application directly from a Google Colab environment. We first install all the required packages, then dynamically generate the complete Streamlit application code and write it to a streamlit_app.py file. We verify that a valid ngrok token is present so Colab can expose the app publicly, and if it is missing or invalid, the function walks us through the backup tunneling options. This setup lets us interact with our AI agent from anywhere, all within a few Colab cells.
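In practice, once the two keys at the top of the notebook are set, a single call in a Colab cell kicks off the whole pipeline:

# One call launches everything from a Colab cell
run_in_colab()   # installs packages, writes streamlit_app.py, starts the tunnel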

def start_streamlit_with_ngrok():
    """Start Streamlit with ngrok tunnel"""
    import subprocess
    import threading
    from pyngrok import ngrok

    def start_streamlit():
        subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])

    print("🚀 Starting Streamlit server...")
    thread = threading.Thread(target=start_streamlit)
    thread.daemon = True
    thread.start()

    time.sleep(5)

    try:
        print("🌐 Creating ngrok tunnel...")
        public_url = ngrok.connect(8501)
        print(f"🔗 SUCCESS! Access your app at: {public_url}")
        print("✨ Your Advanced LangChain Agent is now running publicly!")
        print("📱 You can share this URL with others!")

        print("⏳ Keeping tunnel alive... Press Ctrl+C to stop")
        try:
            ngrok_process = ngrok.get_ngrok_process()
            ngrok_process.proc.wait()
        except KeyboardInterrupt:
            print("👋 Shutting down...")
            ngrok.kill()

    except Exception as e:
        print(f"❌ Ngrok tunnel failed: {e}")
        try_alternative_tunnels()


def try_alternative_tunnels():
    """Try alternative tunneling methods"""
    print("🔄 Trying alternative tunnel methods...")

    import subprocess
    import threading

    def start_streamlit():
        subprocess.run(['streamlit', 'run', 'streamlit_app.py', '--server.port=8501', '--server.headless=true'])

    thread = threading.Thread(target=start_streamlit)
    thread.daemon = True
    thread.start()

    time.sleep(3)

    print("🌐 Streamlit is running on http://localhost:8501")
    print("\n📋 ALTERNATIVE TUNNEL OPTIONS:")
    print("1. localtunnel: Run this in a new cell:")
    print("   !npx localtunnel --port 8501")
    print("\n2. serveo.net: Run this in a new cell:")
    print("   !ssh -R 80:localhost:8501 serveo.net")
    print("\n3. Colab public URL (if available):")
    print("   Use the 'Public URL' button in Colab's interface")

    try:
        while True:
            time.sleep(60)
    except KeyboardInterrupt:
        print("👋 Shutting down...")


if __name__ == "__main__":
    try:
        get_ipython()
        print("🚀 Google Colab detected - starting setup...")
        run_in_colab()
    except NameError:
        main()

In the last section, we set up the execution logic to run the application either locally or inside Google Colab. The start_streamlit_with_ngrok() function starts the Streamlit server in the background and uses ngrok to expose it publicly for easy access and sharing. If ngrok fails, the try_alternative_tunnels() function kicks in with alternative tunneling options such as localtunnel and serveo. With the __main__ block, we automatically detect whether we are running in Colab and launch the appropriate setup, making the entire deployment process smooth, flexible, and shareable from anywhere.
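The Colab detection above relies on a simple idiom: get_ipython() is defined inside IPython/Colab kernels and raises NameError in a plain Python process. Extracted as a standalone helper (a sketch, not part of the original script):

# Hypothetical helper isolating the notebook-detection idiom used above
def in_notebook() -> bool:
    try:
        get_ipython()    # defined by IPython/Colab kernels only
        return True
    except NameError:    # plain `python script.py` execution
        return False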

All in all, we now have a fully functional AI agent running inside a sleek Streamlit interface, able to answer queries, remember user input, and even share its services publicly via ngrok. We've seen how easily Streamlit lets us integrate advanced AI capabilities into an engaging, user-friendly application. From here, we can extend the agent's tools, plug it into larger workflows, or deploy it as part of a smarter application. With Streamlit handling the frontend and LangChain agents powering the logic, we have laid a solid foundation for the next generation of interactive AI experiences.

