Citations: Can Anthropic's New Feature Solve AI's Trust Problem?

AI verification has been a serious problem for some time. Large language models (LLMs) have improved at an incredible pace, but the challenge of proving their outputs accurate remains unresolved.

Anthropic is trying to solve this problem, and of all the large AI companies, I think it has taken the best shot at it.

The company has released Citations, a new API feature for its Claude models that changes how AI systems verify their responses. The technology automatically breaks source documents into digestible chunks and links each AI-generated statement back to its original source, much like an academic paper citing its references.

Citations tackles one of AI's most persistent challenges: proving that generated content is accurate and trustworthy. Instead of requiring complex prompt engineering or manual verification, the system automatically processes documents and attaches sentence-level source verification to every claim it makes.

The data shows encouraging results: citation accuracy improved by 15% compared with traditional prompt-based approaches.

Why It Matters Now

AI trust has become a key obstacle for enterprises (and individual users). As organizations move beyond experiments and put AI into core operations, the inability to verify AI output efficiently has become a serious bottleneck.

Current verification workflows expose an obvious problem: organizations are forced to choose between speed and accuracy. Manual review doesn't scale, and unverified AI output carries too much risk. The challenge is especially acute in regulated industries, where accuracy isn't just preferred; it's required.

Citations arrives at a critical moment in AI development. As language models grow more capable, the demand for built-in verification has grown proportionally. We need systems that can be deployed with confidence in professional environments where accuracy is non-negotiable.

Under the Hood: The Technical Architecture

The magic of Citations lies in how it processes documents. Unlike traditional AI systems, which typically treat source material as one undifferentiated block of text, Citations decomposes it into what Anthropic calls "chunks." These can be individual sentences or user-defined sections, and they create a granular foundation for verification.
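As a mental model (not Anthropic's actual implementation; the splitting rule and the chunk fields below are my own assumptions), sentence-level chunking with character offsets might look like this:

```python
import re

def chunk_sentences(text: str) -> list[dict]:
    """Split a source document into sentence-level chunks, recording
    each chunk's character offsets so a citation can point back to
    the exact span it came from."""
    chunks = []
    # Naive splitter: a sentence ends at ., ! or ? followed by whitespace
    # (or end of string). Real systems handle abbreviations, quotes, etc.
    for match in re.finditer(r"[^.!?]+[.!?]+(?:\s+|$)|[^.!?]+$", text):
        sentence = match.group().strip()
        if sentence:
            chunks.append({
                "text": sentence,
                "start_char_index": match.start(),
                "end_char_index": match.end(),
            })
    return chunks

doc = "The grass is green. The sky is blue. Water is wet."
for c in chunk_sentences(doc):
    print(c["text"])
```

Each claim in a response can then be tied to one of these spans rather than to the document as a whole, which is what makes sentence-level verification possible.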

Here's the technical breakdown:

Document Processing

Citations handles documents differently depending on their format. Plain-text files face essentially no restrictions beyond the standard 200,000-token limit on the total request, which covers your context, your prompt, and the documents themselves.

PDF processing is more involved. The system processes PDFs visually, not just as text, which introduces some key constraints:

  • 32 MB file size limit
  • Maximum of 100 pages per file
  • Each page consumes roughly 1,500-3,000 tokens
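These limits are easy to check before you upload anything. A small pre-flight helper, using only the figures quoted in this article (the actual per-page token cost varies, so the estimate is a range):

```python
MAX_PDF_BYTES = 32 * 1024 * 1024   # 32 MB file size limit
MAX_PDF_PAGES = 100                # per-file page cap
TOKENS_PER_PAGE = (1500, 3000)     # rough visual-processing cost range

def check_pdf(size_bytes: int, pages: int) -> dict:
    """Pre-flight check: does a PDF fit the documented limits,
    and roughly how many tokens will it consume?"""
    return {
        "fits": size_bytes <= MAX_PDF_BYTES and pages <= MAX_PDF_PAGES,
        "estimated_tokens": (pages * TOKENS_PER_PAGE[0],
                             pages * TOKENS_PER_PAGE[1]),
    }

# A 10-page, 2 MB report fits comfortably.
print(check_pdf(2 * 1024 * 1024, 10))
```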

Token management

These limits matter in practice. When you use Citations, you need to plan your token budget carefully. Here's how it breaks down:

For standard text:

  • Total request limit: 200,000 tokens
  • Includes: context + prompt + documents
  • Cited text in the output is not billed separately

For PDFs:

  • Higher token consumption per page
  • Visual-processing overhead
  • More complex token accounting
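Putting the two budgets together, a rough planning helper might look like this. The 200,000-token ceiling comes from the article; costing PDFs at 2,250 tokens per page (the midpoint of the 1,500-3,000 range) is my own simplification:

```python
REQUEST_LIMIT = 200_000          # total per request: context + prompt + documents
PDF_TOKENS_PER_PAGE = 2_250      # assumed midpoint of the 1,500-3,000 range

def remaining_budget(prompt_tokens: int,
                     text_doc_tokens: int,
                     pdf_pages: int = 0) -> int:
    """Estimate how many tokens remain for conversational context
    after accounting for the prompt, plain-text documents, and PDFs."""
    used = prompt_tokens + text_doc_tokens + pdf_pages * PDF_TOKENS_PER_PAGE
    return REQUEST_LIMIT - used

# A 500-token prompt, a 40,000-token text document, and a 20-page PDF:
print(remaining_budget(500, 40_000, pdf_pages=20))  # 114500
```

A negative result means the request will not fit and something has to be trimmed or split across calls.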

Citations and RAG: Key Differences

Citations is not a retrieval-augmented generation (RAG) system, and the distinction matters. While RAG systems focus on finding relevant information in a knowledge base, Citations verifies responses against the information you have already chosen to provide.

Think of it this way: RAG decides which information gets used; Citations ensures that information is used accurately. That means:

  • RAG: handles information retrieval
  • Citations: handles information verification
  • Combined potential: the two systems can work together
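Here is a sketch of how the two could be combined: a deliberately naive keyword retriever picks candidate documents, and the winners are packaged as citation-enabled document blocks. The block format mirrors the document content blocks in Anthropic's Messages API as documented at the time of writing, but treat it as illustrative rather than authoritative:

```python
def retrieve(query: str, corpus: list[dict], top_k: int = 2) -> list[dict]:
    """Toy retrieval step: rank documents by how many query words
    they contain. A real RAG system would use embeddings instead."""
    words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def to_document_blocks(docs: list[dict]) -> list[dict]:
    """Verification step: wrap retrieved text as citation-enabled
    document blocks, so every claim must cite one of them."""
    return [
        {
            "type": "document",
            "source": {"type": "text", "media_type": "text/plain",
                       "data": d["text"]},
            "title": d["title"],
            "citations": {"enabled": True},
        }
        for d in docs
    ]

corpus = [
    {"title": "Pricing", "text": "Cited text carries no extra output cost."},
    {"title": "Limits", "text": "PDFs are capped at 100 pages and 32 MB."},
    {"title": "Recipes", "text": "Add flour, sugar, and butter."},
]
blocks = to_document_blocks(retrieve("Is the PDF capped at 100 pages", corpus))
print([b["title"] for b in blocks])
```

The division of labor is exactly the one described above: retrieval chooses the evidence, and Citations constrains the model to cite only that evidence.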

This architectural choice means Citations guarantees accuracy within the context you provide, while leaving retrieval strategy to complementary systems.

Integration Path and Performance

Setup is simple: Citations works through Anthropic's standard API, so if you're already using Claude, you already have it. It integrates directly with the Messages API, eliminating the need for separate file storage or complex infrastructure changes.
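A minimal request looks like this. It is built as a plain payload so it runs without credentials; the shape follows Anthropic's documented Messages API document blocks at the time of writing, and the model name is a placeholder, so check the current API reference before relying on either:

```python
def build_request(document_text: str, question: str) -> dict:
    """Assemble the kwargs you would pass to client.messages.create()
    with the anthropic SDK: one citation-enabled document block,
    followed by the user's question."""
    return {
        "model": "claude-3-5-sonnet-latest",  # placeholder; pick your model
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "document",
                        "source": {
                            "type": "text",
                            "media_type": "text/plain",
                            "data": document_text,
                        },
                        "title": "Source document",
                        "citations": {"enabled": True},
                    },
                    {"type": "text", "text": question},
                ],
            }
        ],
    }

req = build_request("The grass is green. The sky is blue.",
                    "What color is the sky?")
print(req["messages"][0]["content"][0]["citations"])
```

With the real SDK you would then call `anthropic.Anthropic().messages.create(**req)` and read the citation entries attached to the text blocks in the response.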

Pricing follows Anthropic's token-based model, with one key advantage: you pay for the input tokens from your source documents, but the cited output itself incurs no additional cost. The result is a predictable, usage-based cost structure.

The performance metrics tell a compelling story:

  • Overall citation accuracy improved by 15%
  • Source hallucinations eliminated entirely (down from 10% to zero)
  • Sentence-level verification for every claim

Organizations (and individuals) relying on unverifiable AI systems will find themselves at a disadvantage, especially in regulated industries or high-stakes environments where accuracy is essential.

Looking ahead, we are likely to see:

  • Citation features becoming standard across AI platforms
  • Verification systems evolving beyond text to other media
  • Industry-specific verification standards emerging

The industry as a whole needs to rethink AI credibility and verification. Users should be able to verify the source of every claim with ease.
