How to Build an Advanced Agentic Retrieval-Augmented Generation (RAG) System with Dynamic Strategies and Intelligent Retrieval

In this tutorial, we walk through the implementation of an agentic Retrieval-Augmented Generation (RAG) system. We design it so that the agent does more than just fetch documents: it actively decides when retrieval is needed, selects the best retrieval strategy, and synthesizes comprehensive, context-aware responses. By combining embeddings, FAISS indexing, and LLM-driven reasoning, we create a practical demonstration of how to elevate a standard RAG pipeline into something more adaptive and intelligent. Check out the full code here.

import numpy as np
import faiss
from sentence_transformers import SentenceTransformer
import json
import re
from typing import List, Dict, Any, Optional
from dataclasses import dataclass
from enum import Enum


class MockLLM:
   def generate(self, prompt: str, max_tokens: int = 150) -> str:
       prompt_lower = prompt.lower()
      
       if "decide whether to retrieve" in prompt_lower:
           if any(word in prompt_lower for word in ["specific", "recent", "data", "facts", "when", "who", "what"]):
               return "RETRIEVE: The query requires specific factual information that needs to be retrieved."
           else:
               return "NO_RETRIEVE: This is a general question that can be answered with existing knowledge."
      
       elif "choose retrieval strategy" in prompt_lower:
           if "comparison" in prompt_lower or "versus" in prompt_lower:
               return "STRATEGY: multi_query - Need to retrieve information about multiple entities for comparison."
           elif "recent" in prompt_lower or "latest" in prompt_lower:
               return "STRATEGY: temporal - Focus on recent information."
           else:
               return "STRATEGY: semantic - Standard semantic similarity search."
      
       elif "synthesize" in prompt_lower and "context:" in prompt_lower:
           return "Based on the retrieved information, here's a comprehensive answer that combines multiple sources and provides specific details with proper context."
      
       return "This is a mock response. In practice, use a real LLM like OpenAI's GPT or similar."


class RetrievalStrategy(Enum):
   SEMANTIC = "semantic"
   MULTI_QUERY = "multi_query"
   TEMPORAL = "temporal"
   HYBRID = "hybrid"


@dataclass
class Document:
   id: str
   content: str
   metadata: Dict[str, Any]
   embedding: Optional[np.ndarray] = None

We lay the foundation of our agentic RAG system. We define a MockLLM that simulates the agent's decisions, create the retrieval-strategy enumeration, and design the Document dataclass so that we can structure and manage our knowledge base effectively. Check out the full code here.
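
To make the data model concrete, here is a minimal usage sketch (the sample content, metadata, and ID below are illustrative placeholders, not part of the tutorial's knowledge base):

# Minimal usage sketch: constructing a Document and resolving a strategy.
sample = Document(
    id="demo_1",
    content="FAISS enables efficient similarity search over dense vectors.",
    metadata={"topic": "vector search", "date": "2024-04-01"},
)
strategy = RetrievalStrategy("multi_query")  # Enum lookup by value
print(sample.id, strategy is RetrievalStrategy.MULTI_QUERY)  # demo_1 True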

class AgenticRAGSystem:
   def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
       self.encoder = SentenceTransformer(model_name)
       self.llm = MockLLM()
       self.documents: List[Document] = []
       self.index: Optional[faiss.Index] = None
      
   def add_documents(self, documents: List[Dict[str, Any]]) -> None:
       print(f"Processing {len(documents)} documents...")
      
       for i, doc in enumerate(documents):
           doc_obj = Document(
               id=doc.get('id', str(i)),
               content=doc['content'],
               metadata=doc.get('metadata', {})
           )
           self.documents.append(doc_obj)
      
       contents = [doc.content for doc in self.documents]
       embeddings = self.encoder.encode(contents, show_progress_bar=True)
      
       for doc, embedding in zip(self.documents, embeddings):
           doc.embedding = embedding
      
       dimension = embeddings.shape[1]
       self.index = faiss.IndexFlatIP(dimension)
      
       faiss.normalize_L2(embeddings)
       self.index.add(embeddings.astype('float32'))
      
       print(f"Knowledge base built with {len(self.documents)} documents")

We build the core of the agentic RAG system. We initialize the embedding model, set up the FAISS index, and add documents by encoding their contents into vectors, enabling fast and accurate semantic retrieval from our knowledge base. Check out the full code here.
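
As a side note, the reason we call faiss.normalize_L2 before adding vectors to an IndexFlatIP is that the inner product of unit-length vectors equals their cosine similarity. A small standalone sketch, using random vectors purely for illustration:

import numpy as np
import faiss

# With L2-normalized vectors, inner product == cosine similarity,
# so IndexFlatIP ranks entries by cosine score.
vecs = np.random.rand(4, 8).astype('float32')
faiss.normalize_L2(vecs)                  # in-place normalization to unit length
index = faiss.IndexFlatIP(vecs.shape[1])  # exact inner-product index
index.add(vecs)

query = vecs[:1].copy()
scores, ids = index.search(query, 2)
print(ids[0], scores[0])  # the query itself ranks first, with score ~1.0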

    def decide_retrieval(self, query: str) -> bool:
       decision_prompt = f"""
       Analyze the following query and decide whether to retrieve information:
       Query: "{query}"
      
       Decide whether to retrieve information from the knowledge base.
       Consider if this needs specific facts, recent data, or can be answered generally.
      
       Respond with either:
       RETRIEVE: [reason] or NO_RETRIEVE: [reason]
       """
      
       response = self.llm.generate(decision_prompt)
       should_retrieve = response.startswith("RETRIEVE:")
      
       print(f"🤖 Agent Decision: {'Retrieve' if should_retrieve else 'Direct Answer'}")
       print(f"   Reasoning: {response.split(':', 1)[1].strip() if ':' in response else response}")
      
       return should_retrieve
  
   def choose_strategy(self, query: str) -> RetrievalStrategy:
       strategy_prompt = f"""
       Choose the best retrieval strategy for this query:
       Query: "{query}"
      
       Available strategies:
       - semantic: Standard similarity search
       - multi_query: Multiple related queries (for comparisons)
       - temporal: Focus on recent information
       - hybrid: Combination approach
      
       Choose retrieval strategy and explain why.
       Respond with: STRATEGY: [strategy_name] - [reasoning]
       """
      
       response = self.llm.generate(strategy_prompt)
      
       if "multi_query" in response.lower():
           strategy = RetrievalStrategy.MULTI_QUERY
       elif "temporal" in response.lower():
           strategy = RetrievalStrategy.TEMPORAL
       elif "hybrid" in response.lower():
           strategy = RetrievalStrategy.HYBRID
       else:
           strategy = RetrievalStrategy.SEMANTIC
      
       print(f"🎯 Retrieval Strategy: {strategy.value}")
       print(f"   Reasoning: {response.split('-', 1)[1].strip() if '-' in response else response}")
      
       return strategy

We enable our agent to think before it retrieves. It first determines whether the query really requires retrieval, and then chooses the most appropriate strategy: semantic, multi-query, temporal, or hybrid. This lets us locate the right context, with clear, printed reasoning at each step. Check out the full code here.
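
To see these two decisions in isolation, we can call them directly on contrasting queries; no knowledge base is needed yet, since both methods only consult the LLM (here, the mock):

# Quick check of the agent's reasoning, independent of any index.
agent = AgenticRAGSystem()
print(agent.decide_retrieval("What specific data supports RAG?"))  # True
print(agent.choose_strategy("Compare AI versus Machine Learning"))
# -> RetrievalStrategy.MULTI_QUERY: "versus" triggers the mock's comparison branch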

    def retrieve_documents(self, query: str, strategy: RetrievalStrategy, k: int = 3) -> List[Document]:
       if not self.index:
           print("❌ No knowledge base available")
           return []
      
       if strategy == RetrievalStrategy.MULTI_QUERY:
           queries = [query, f"advantages of {query}", f"disadvantages of {query}"]
           all_docs = []
           for q in queries:
               docs = self._semantic_search(q, k=2)
               all_docs.extend(docs)
           seen_ids = set()
           unique_docs = []
           for doc in all_docs:
               if doc.id not in seen_ids:
                   unique_docs.append(doc)
                   seen_ids.add(doc.id)
           return unique_docs[:k]
      
       elif strategy == RetrievalStrategy.TEMPORAL:
           docs = self._semantic_search(query, k=k*2)
           docs_with_dates = [(doc, doc.metadata.get('date', '1900-01-01')) for doc in docs]
           docs_with_dates.sort(key=lambda x: x[1], reverse=True)
           return [doc for doc, _ in docs_with_dates[:k]]
      
       else:
           return self._semantic_search(query, k=k)
  
   def _semantic_search(self, query: str, k: int) -> List[Document]:
       query_embedding = self.encoder.encode([query])
       faiss.normalize_L2(query_embedding)
      
       scores, indices = self.index.search(query_embedding.astype('float32'), k)
      
       results = []
       for score, idx in zip(scores[0], indices[0]):
            if idx < len(self.documents):
                results.append(self.documents[idx])
        return results

    def synthesize_response(self, query: str, retrieved_docs: List[Document]) -> str:
       if not retrieved_docs:
           return self.llm.generate(f"Answer this query: {query}")
      
       context = "nn".join([f"Document {i+1}: {doc.content}"
                             for i, doc in enumerate(retrieved_docs)])
      
       synthesis_prompt = f"""
       Query: {query}
      
       Context: {context}
      
       Synthesize a comprehensive answer using the provided context.
       Be specific and reference the information sources when relevant.
       """
      
       return self.llm.generate(synthesis_prompt, max_tokens=200)

We implement the methods that actually acquire and use knowledge. We perform semantic search when needed, branch into multi-query or temporal re-ranking as appropriate, deduplicate the results, and then synthesize a focused answer from the retrieved context. Throughout, we keep retrieval efficient and transparent. Check out the full code here.
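
One detail worth noting: the temporal strategy sorts by the ISO-8601 'date' strings in document metadata, which order chronologically even when compared as plain strings, so no date parsing is needed. A tiny standalone sketch with made-up dates:

# ISO-format dates sort chronologically as plain strings.
docs = [
    {"id": "a", "date": "2024-01-15"},
    {"id": "b", "date": "2024-03-20"},
    {"id": "c", "date": "2024-02-10"},
]
docs.sort(key=lambda d: d.get("date", "1900-01-01"), reverse=True)
print([d["id"] for d in docs])  # ['b', 'c', 'a'] -- newest first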

   def query(self, query: str) -> str:
       print(f"n🔍 Processing Query: '{query}'")
       print("=" * 50)
      
       if not self.decide_retrieval(query):
           print("n📝 Generating direct response...")
           return self.llm.generate(f"Answer this query: {query}")
      
       strategy = self.choose_strategy(query)
      
       print(f"n📚 Retrieving documents using {strategy.value} strategy...")
       retrieved_docs = self.retrieve_documents(query, strategy)
       print(f"   Retrieved {len(retrieved_docs)} documents")
      
       print("n🧠 Synthesizing response...")
       response = self.synthesize_response(query, retrieved_docs)
      
       if retrieved_docs:
           print("n📄 Retrieved Context:")
           for i, doc in enumerate(retrieved_docs[:2], 1):
               print(f"   {i}. {doc.content[:100]}...")
      
       return response

We bring all the pieces together into a single pipeline. When we run a query, the agent first decides whether retrieval is needed, then selects the appropriate strategy, retrieves documents accordingly, and finally synthesizes a response, displaying the retrieved context along the way for transparency. This makes the system feel both agentic and interpretable. Check out the full code here.
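
In practice the whole pipeline reduces to two calls, as this minimal usage sketch shows (create_sample_knowledge_base is defined in the next block):

# End-to-end usage sketch, mirroring the demo below.
rag = AgenticRAGSystem()
rag.add_documents(create_sample_knowledge_base())
print(rag.query("Compare AI and Machine Learning"))  # exercises the multi_query path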

def create_sample_knowledge_base():
   return [
       {
           "id": "ai_1",
           "content": "Artificial Intelligence (AI) refers to computer systems that can perform tasks requiring human intelligence",
           "metadata": {"topic": "AI basics", "date": "2024-01-15"}
       },
       {
           "id": "ml_1",
           "content": "ML is a subset of AI.",
           "metadata": {"topic": "Machine Learning", "date": "2024-02-10"}
       },
       {
           "id": "rag_1",
           "content": "Retrieval-Augmented Generation (RAG) combines the power of large language models with external knowledge retrieval to provide more accurate and up-to-date responses.",
           "metadata": {"topic": "RAG", "date": "2024-03-05"}
       },
       {
           "id": "agents_1",
           "content": "AI agents",
           "metadata": {"topic": "AI Agents", "date": "2024-03-20"}
       }
   ]


if __name__ == "__main__":
   print("🚀 Initializing Agentic RAG System...")
  
   rag_system = AgenticRAGSystem()
  
   docs = create_sample_knowledge_base()
   rag_system.add_documents(docs)
  
   demo_queries = [
       "What is artificial intelligence?",
       "How are you today?",
       "Compare AI and Machine Learning",
   ]
  
   for query in demo_queries:
       response = rag_system.query(query)
       print(f"n💬 Final Response: {response}")
       print("n" + "="*80)
  
   print("n✅ Agentic RAG Tutorial Complete!")
   print("nKey Features Demonstrated:")
   print("• Agent-driven retrieval decisions")
   print("• Dynamic strategy selection")
   print("• Multi-modal retrieval approaches")
   print("• Transparent reasoning process")

We wrap everything in a runnable demo. We create a small knowledge base of AI-related documents, initialize the agentic RAG system, and run sample queries that highlight different behaviors, including retrieval, direct answers, and comparisons. This final block ties the entire tutorial together and shows the agent's reasoning in action.

In summary, we see how agent-driven retrieval decisions, dynamic strategy selection, and transparent reasoning come together in the workflow of an advanced agentic RAG system. We now have a working system that highlights what agents add to RAG, making information retrieval smarter, more targeted, and more adaptable. This foundation lets us scale to a real LLM, a larger knowledge base, and more complex strategies in future iterations.
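
For readers who want to go beyond the mock, here is a minimal sketch of a drop-in replacement for MockLLM backed by a real model. It assumes the openai Python package is installed and an OPENAI_API_KEY environment variable is set; the model name is an example, not prescribed by the tutorial:

from openai import OpenAI

class OpenAILLM:
    """Drop-in replacement for MockLLM (sketch; model name is an example)."""
    def __init__(self, model: str = "gpt-4o-mini"):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def generate(self, prompt: str, max_tokens: int = 150) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
        )
        return response.choices[0].message.content

# rag_system.llm = OpenAILLM()  # swap in place of the mock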


Check out the full code here. Feel free to check out our GitHub Page for Tutorials, Codes, and Notebooks. Also, follow us on Twitter, join our 100k+ ML SubReddit, and subscribe to our Newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform that stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.

