
AutoAgent: A Fully Automated and Highly Self-Developing Framework that Enables Users to Create and Deploy LLM Agents through Natural Language Alone


From business operations to scientific research, AI agents can handle huge datasets, streamline processes, and support decision-making. Yet despite these advances, building and customizing LLM agents remains a difficult task for most users. The main reason is that AI agent platforms demand programming skills, restricting access to a small fraction of the population. Only 0.03% of the global population has the necessary coding expertise, leaving large-scale deployment of LLM agents out of reach for non-technical users. Although AI is increasingly becoming an essential tool across industries, non-programming professionals cannot capitalize on its full potential, and a wide gap remains between technical capability and accessibility. One of the biggest obstacles in AI agent development is this dependence on programming skills.

Existing systems such as LangChain and AutoGen are designed for developers with programming experience, which makes designing or customizing AI agents difficult for non-technical users. This barrier slows the adoption of AI automation, because most professionals lack the technical skills these tools require. Even with well-documented frameworks, creating AI agents often involves complex prompt engineering, API integration, and debugging, putting it beyond the reach of a wider audience. The challenge is to build a system that requires no coding yet still gives users flexible and powerful AI-driven automation.

Current frameworks operate primarily in developer-oriented environments and require deep programming expertise. LangChain, for example, is widely used for building LLM applications but presumes familiarity with API calls and structured data processing. Other options, such as AutoGen and CAMEL, extend LLM functionality by letting agents interact with one another according to assigned roles, but they too depend on technical setups that non-technical users may find difficult to implement. Although these tools improve AI automation, they remain largely inaccessible to non-coding users. The lack of a true zero-code solution limits the reach of AI and prevents widespread adoption by non-developers.
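To make that barrier concrete, the sketch below shows, in generic terms, the kind of plumbing a developer-oriented agent framework typically demands: hand-written tool definitions, a prompt template, and output parsing. It deliberately mimics no particular library's API, and the call_llm function is a placeholder stand-in for a real model call.

```python
# Hypothetical, framework-agnostic sketch of a developer-oriented agent setup.
# It does not reflect any specific library's API; it only illustrates the kind
# of plumbing (tools, prompts, parsing) that usually requires coding skills.

import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., a hosted chat endpoint)."""
    # A real implementation would send `prompt` to a model and return its reply.
    return '{"tool": "search", "input": "latest LLM agent frameworks"}'

# 1. The developer must define tools by hand.
def search(query: str) -> str:
    return f"(stub) top results for: {query}"

TOOLS = {"search": search}

# 2. The developer must write the prompt template and specify the output format.
PROMPT = (
    "You are an agent. Available tools: search(query).\n"
    "Respond with a JSON object containing the keys 'tool' and 'input'.\n"
    "Task: {task}"
)

# 3. The developer must parse the model output and dispatch to the right tool.
def run_agent(task: str) -> str:
    decision = json.loads(call_llm(PROMPT.format(task=task)))
    return TOOLS[decision["tool"]](decision["input"])

if __name__ == "__main__":
    print(run_agent("Summarize recent work on zero-code agent frameworks"))
```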

Researchers from the University of Hong Kong introduced AutoAgent, a fully automated, zero-code AI agent framework designed to bridge this gap. AutoAgent enables users to create and deploy LLM agents using natural language commands, eliminating the need for programming expertise. Unlike existing solutions, AutoAgent works as a self-developing agent operating system: users describe tasks in plain language, and the system generates agents and workflows on its own. The framework includes four key components: Agentic System Utilities, an LLM-powered engine, a self-managing file system, and a self-play agent customization module. Together, these components let users build AI-driven solutions for a wide range of applications without writing a single line of code. AutoAgent aims to democratize AI development and make intelligent automation accessible to a far broader audience.
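As a rough illustration of the zero-code idea, the sketch below shows what creating an agent from a plain-language request could look like. The create_agent and plan_workflow functions and the workflow format are hypothetical stand-ins chosen for illustration, not AutoAgent's actual interface.

```python
# Hypothetical illustration of zero-code agent creation from a natural-language
# request. The interface below is an assumption for illustration only; it is
# NOT AutoAgent's real API.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    workflow: list  # ordered steps derived from the user's description

def plan_workflow(description: str) -> list:
    """Placeholder for an LLM-powered planner that turns plain language
    into a structured, executable workflow."""
    # A real engine would ask an LLM to decompose the request into steps.
    return ["fetch relevant documents", "extract key figures", "draft a summary report"]

def create_agent(description: str) -> Agent:
    """Build an agent directly from a natural-language task description."""
    return Agent(name="user-agent", workflow=plan_workflow(description))

# The user writes only plain language -- no prompt engineering, no API wiring.
agent = create_agent(
    "Every Monday, read the quarterly reports in my folder and email me a one-page summary."
)
print(agent.workflow)
```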

AutoAgent runs on an advanced multi-agent architecture. At its core, an LLM-driven engine transforms natural-language instructions into structured workflows. Unlike conventional frameworks that require manual coding, AutoAgent dynamically constructs AI agents based on user input. Its self-managing file system enables efficient data handling by automatically converting diverse file formats into a searchable knowledge base, ensuring that agents can retrieve relevant information from multiple sources. The self-play agent customization module further improves adaptability through iterative optimization of agent functions. Together, these components allow AutoAgent to carry out complex AI-driven tasks without manual intervention, greatly reducing the complexity of agent development and letting non-programmers work productively with the technology.
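The self-managing file system can be pictured with the minimal sketch below, which indexes a folder of text-like files into (source, chunk) pairs and retrieves passages by simple word overlap. The chunking and scoring choices are simplifying assumptions made for illustration; the real system would rely on richer file parsing and embedding-based retrieval.

```python
# Minimal sketch of turning raw documents into a searchable knowledge base.
# Keyword-overlap scoring stands in for real embedding-based retrieval; this
# design is an assumption for illustration, not AutoAgent's implementation.

from pathlib import Path

def chunk(text: str, size: int = 400) -> list[str]:
    """Split raw text into fixed-size chunks for indexing."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(folder: str) -> list[tuple[str, str]]:
    """Index every text-like file in a folder as (source, chunk) pairs."""
    index = []
    for path in Path(folder).glob("**/*"):
        if path.is_file() and path.suffix.lower() in {".txt", ".md", ".csv"}:
            for piece in chunk(path.read_text(errors="ignore")):
                index.append((str(path), piece))
    return index

def search(index: list[tuple[str, str]], query: str, k: int = 3):
    """Rank chunks by word overlap with the query and return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        index,
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

if __name__ == "__main__":
    kb = build_index("documents")  # hypothetical folder of user files
    for source, passage in search(kb, "quarterly revenue growth"):
        print(source, passage[:80])
```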

Performance evaluation shows significant improvements over existing frameworks. AutoAgent earned the second-highest ranking on the GAIA benchmark, a rigorous evaluation of general AI assistants, with an overall accuracy of 55.15%. On Level 1 tasks its accuracy reached 71.7%, surpassing leading open-source frameworks such as Langfun Agent (60.38%) and FRIDAY (45.28%). The system's effectiveness in retrieval-augmented generation (RAG) is also evident: on the Multi-Hop RAG benchmark, AutoAgent achieved 73.51% accuracy, outperforming LangChain's RAG implementation (62.83%) while maintaining a significantly lower error rate of 14.2%. AutoAgent also shows strong adaptability in complex multi-agent tasks, performing better than structured problem-solving agents such as Magentic-One and Omne.

The research on AutoAgent highlights several key takeaways that underline its impact on and contribution to AI automation:

  1. AutoAgent eliminates the need for programming expertise, enabling users to create and deploy LLM agents using natural language commands.
  2. AutoAgent ranked second on the GAIA benchmark, earning 71.7% accuracy on Level 1 tasks and outperforming several existing frameworks.
  3. AutoAgent achieves 73.51% accuracy on the Multi-Hop RAG benchmark, demonstrating improved retrieval and reasoning capabilities.
  4. The system dynamically generates workflows and coordinates AI agents to solve complex tasks more effectively.
  5. AutoAgent successfully automates financial analysis, document management and other real-life applications, demonstrating its versatility.
  6. By giving non-technical users the ability to create LLM agents, AutoAgent significantly expands the availability of AI beyond software engineers and researchers.
  7. The self-managing file system allows seamless data integration, ensuring that AI agents can retrieve and process information efficiently.
  8. The self-play customization module optimizes agent performance through iterative learning, reducing manual intervention.

Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 80k+ ML SubReddit.


