
Implementing the Self-Refine Technique with Large Language Models (LLMs)

This tutorial demonstrates how to implement the self-refine technique with large language models (LLMs) using Mirascope, a powerful framework for building structured prompt workflows. Self-refine is a prompt engineering strategy in which a model evaluates its own output, generates feedback, and revises its response based on that feedback. This refinement cycle can be repeated multiple times to progressively improve the quality and accuracy of the final answer.

The self-refine method is particularly effective for tasks involving reasoning, code generation, and content creation, where incremental improvements lead to significantly better results.

Install dependencies

!pip install "mirascope[openai]"

OpenAI API Key

To obtain an OpenAI API key, sign in to your OpenAI account and generate a new key from the API keys page. If you are a new user, you may need to add billing details and make a minimum payment of $5 to activate API access.

import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')

Basic self-refine implementation

We first implement the self-refine technique using Mirascope's @openai.call and @prompt_template decorators. The process begins by generating an initial response to a user query. That response is then evaluated by the model itself, which provides constructive feedback. Finally, the model uses this feedback to generate an improved response. The self_refine function lets us repeat this improvement process for a specified number of iterations, enhancing output quality with each cycle.

from mirascope.core import openai, prompt_template
from mirascope.core.openai import OpenAICallResponse


# Generate an initial answer to the user's query.
@openai.call(model="gpt-4o-mini")
def call(query: str) -> str:
    return query


# Ask the model to critique a previous response to the given query.
@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    Here is a query and a response to the query. Give feedback about the answer,
    noting what was correct and incorrect.
    Query:
    {query}
    Response:
    {response}
    """
)
def evaluate_response(query: str, response: OpenAICallResponse): ...


@openai.call(model="gpt-4o-mini")
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}

    Consider the feedback to generate a new response to the query.
    """
)
def generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    # Get feedback on the previous response and inject it into the prompt
    # via Mirascope's computed_fields.
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def self_refine(query: str, depth: int) -> str:
    # Start from an initial answer, then refine it `depth` times using
    # model-generated feedback.
    response = call(query)
    for _ in range(depth):
        response = generate_new_response(query, response)
    return response.content


query = "A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?"

print(self_refine(query, 1))
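
To watch the answer evolve across iterations, a small variant of the loop can print each intermediate draft. The self_refine_verbose helper below is a hypothetical addition (not part of the original tutorial) that reuses the call and generate_new_response functions defined above:

def self_refine_verbose(query: str, depth: int) -> str:
    # Hypothetical helper: identical to self_refine, but prints each draft
    # so the effect of every feedback-driven revision is visible.
    response = call(query)
    print(f"Initial draft:\n{response.content}\n")
    for i in range(depth):
        response = generate_new_response(query, response)
        print(f"Revision {i + 1}:\n{response.content}\n")
    return response.content

Running it with a depth of 2 or 3 makes the incremental corrections easy to compare side by side.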

Enhanced self-refine with a response model

In this enhanced version, we define a structured response model, MathSolution, using Pydantic to capture the solution steps and the final numerical answer. The enhanced_generate_new_response function refines the output by incorporating model-generated feedback and formatting the improved response into this well-defined schema. This approach ensures clarity, consistency, and better downstream usability of the refined answers, especially for mathematical problem-solving tasks.

from pydantic import BaseModel, Field


class MathSolution(BaseModel):
    steps: list[str] = Field(..., description="The steps taken to solve the problem")
    final_answer: float = Field(..., description="The final numerical answer")


@openai.call(model="gpt-4o-mini", response_model=MathSolution)
@prompt_template(
    """
    For this query:
    {query}
    The following response was given:
    {response}
    Here is some feedback about the response:
    {feedback}

    Consider the feedback to generate a new response to the query.
    Provide the solution steps and the final numerical answer.
    """
)
def enhanced_generate_new_response(
    query: str, response: OpenAICallResponse
) -> openai.OpenAIDynamicConfig:
    # As before, compute feedback on the previous response and inject it into
    # the prompt; response_model=MathSolution parses the output into a schema.
    feedback = evaluate_response(query, response)
    return {"computed_fields": {"feedback": feedback}}


def enhanced_self_refine(query: str, depth: int) -> MathSolution:
    response = call(query)
    for _ in range(depth):
        solution = enhanced_generate_new_response(query, response)
        # Serialize the structured solution back into text so it can be
        # critiqued in the next iteration.
        response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
    return solution


# Example usage
result = enhanced_self_refine(query, 1)
print(result)
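
Because result is a validated MathSolution instance rather than free-form text, its fields can be consumed directly downstream, for example:

# Pretty-print the structured solution field by field.
for i, step in enumerate(result.steps, start=1):
    print(f"Step {i}: {step}")
print(f"Final answer: {result.final_answer}")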

The enhanced self-refine technique proved effective in accurately solving the given mathematical problem:

“A train travels 120 km at a certain speed. If the speed had been 20 km/h faster, it would have taken 30 minutes less to cover the same distance. What was the original speed of the train?”

Through a single refinement iteration, the model produced a logically sound, step-by-step derivation, arriving at the correct answer of 60 km/h; a quick arithmetic check appears after the list below. This illustrates several key benefits of the self-refine method:

  • Improved accuracy through iterative, feedback-driven enhancement.
  • Clearer reasoning steps, including variable definitions, equation setup, and application of the quadratic formula.
  • Greater transparency, making it easier for users to understand and trust the solution.
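
As a sanity check, the 60 km/h answer can be verified with plain arithmetic and no model calls; the snippet below (an illustrative addition, not part of the original tutorial) recomputes the two travel times:

v = 60                                      # candidate original speed, km/h
time_original = 120 / v                     # 120 km at 60 km/h -> 2.0 hours
time_faster = 120 / (v + 20)                # 120 km at 80 km/h -> 1.5 hours
assert time_original - time_faster == 0.5   # exactly 30 minutes less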

Across a wider range of applications, this technique holds great promise for tasks that demand accuracy, structure, and iterative improvement, from technical problem solving to creative and professional writing. However, implementers should keep the computational cost in mind: each refinement iteration issues two additional model calls (one to generate feedback and one to produce the revised answer), so a run of depth d makes 1 + 2d calls in total. Tuning the refinement depth and the feedback prompts to the specific use case is therefore worthwhile.


The complete code is available here. All credit for this research goes to the researchers of the project.



I am a civil engineering graduate (2022) from Jamia Millia Islamia, New Delhi, and I have a keen interest in data science, especially neural networks and their applications in various fields.