Anthropic’s New Citations Feature for Claude: A Step Toward Enhanced Accuracy and Trust

On Thursday, Anthropic introduced a new API feature called Citations, designed to help Claude models avoid the common issue of confabulations, or hallucinations, by linking their responses directly to source documents. This new tool allows developers to upload documents (PDFs and plaintext files) into Claude’s context window, enabling the AI to automatically cite the specific passages used in its answers.
How Citations Work
When the Citations feature is enabled, Anthropic’s API processes the user-provided documents by chunking them into sentences. These sentences, along with any additional context provided by the user, are then passed to Claude together with the user’s query. The model uses this context to generate answers, referencing the specific passages it drew on.
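In practice, enabling this looks like attaching a document block to an ordinary Messages API request. The sketch below follows the request shape Anthropic documented at launch; the model string, document contents, and exact field names are illustrative and worth checking against the current API reference.

```python
# A sketch of enabling Citations via the Messages API. Field names follow the
# shape Anthropic documented at launch; treat them as illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The warranty covers parts for two years. Labor is covered for one year.",
                },
                "title": "Warranty policy",
                "citations": {"enabled": True},  # turn on sentence-level citations
            },
            {"type": "text", "text": "How long are parts covered under warranty?"},
        ],
    }],
)

# Cited answers come back as text blocks that carry a list of citation
# objects pointing at the source passages used.
for block in response.content:
    print(getattr(block, "text", ""), getattr(block, "citations", None))
```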
As Anthropic explains, this feature has a wide range of potential applications, including:
- Summarizing case files with source-linked key points
- Answering questions from financial documents with traced references
- Powering support systems that cite specific product documentation
In its internal testing, Anthropic found that Citations improved recall accuracy by up to 15 percent compared with custom citation systems users had built inside their prompts. Although a 15 percent improvement may seem modest, Simon Willison, a well-known AI researcher, highlighted the feature’s significance because it builds Retrieval-Augmented Generation (RAG) techniques directly into the API. RAG is an approach in which relevant portions of documents are retrieved first and the model then generates an answer grounded in those fragments, which tends to produce more accurate and contextually relevant responses.
Willison pointed out that while using citations helps verify accuracy, building a system that does so consistently is challenging. However, Citations seems to be a step in the right direction by integrating RAG techniques directly into the model. As Willison writes on his blog, “The core of the Retrieval Augmented Generation (RAG) pattern is to take a user’s question, retrieve portions of documents that might be relevant to that question and then answer the question by including those text fragments in the context provided to the LLM.” This setup helps mitigate the risk of models answering based on outdated or incorrect training data.
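To make the pattern Willison describes concrete, here is a minimal, self-contained sketch of that retrieve-then-answer loop. The keyword-overlap scorer is a toy stand-in for a real embedding or vector-database retriever, and nothing here is specific to Anthropic’s implementation.

```python
# Toy illustration of the RAG pattern: retrieve the chunks most relevant to a
# question, then assemble them into the context handed to the model.

def score(chunk: str, question: str) -> int:
    """Crude relevance score: count words shared between chunk and question."""
    return len(set(chunk.lower().split()) & set(question.lower().split()))

def retrieve(chunks: list[str], question: str, k: int = 3) -> list[str]:
    """Return the k chunks that overlap most with the question."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_prompt(chunks: list[str], question: str) -> str:
    """Number the retrieved excerpts and ask the model to cite them."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer using only the numbered excerpts below and cite them by number.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

document_chunks = [
    "The Citations feature chunks uploaded documents into sentences.",
    "Quoted source text does not count toward output token costs.",
    "Claude 3.5 Haiku is the smaller, cheaper model in the 3.5 family.",
]
question = "How are documents chunked?"
prompt = build_prompt(retrieve(document_chunks, question), question)
print(prompt)  # In a real system this prompt would be sent to the LLM.
```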
Early adopters of Citations have reported promising results. For instance, Thomson Reuters, which uses Claude to power its CoCounsel legal AI reference platform, expressed excitement about the potential of Citations to minimize hallucinations and increase trust in AI-generated content. In addition, Endex, a financial technology company, shared that Citations helped reduce source confabulations from 10% to zero, while also increasing the number of references per response by 20%.
Despite these promising results, Anthropic and other developers caution that relying on any language model to accurately relay reference information still carries inherent risks, especially while the technology is evolving. The company itself emphasizes that this capability should be viewed as part of a broader effort to improve the reliability and trustworthiness of AI-generated content, and that it may need further development and testing in real-world applications.
Pricing and Availability
Citations is available for the Claude 3.5 Sonnet and Claude 3.5 Haiku models through both the Anthropic API and Google Cloud’s Vertex AI platform. Under Anthropic’s token-based pricing, text quoted from a source document doesn’t count toward output token costs. For instance, sourcing a 100-page document as a reference would cost around $0.30 with Claude 3.5 Sonnet or $0.08 with Claude 3.5 Haiku.
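Those figures are consistent with a rough back-of-the-envelope check, assuming a 100-page document runs to roughly 100,000 input tokens and using Anthropic’s published per-token input prices (about $3 per million tokens for Claude 3.5 Sonnet and $0.80 per million for Claude 3.5 Haiku); the page-to-token assumption here is mine, not Anthropic’s.

```python
# Rough cost check under the assumptions above (~1,000 tokens per page,
# $3.00/MTok Sonnet input, $0.80/MTok Haiku input); actual token counts
# depend on the document.
doc_tokens = 100 * 1_000                       # ~100-page document
sonnet_cost = doc_tokens / 1_000_000 * 3.00    # ≈ $0.30
haiku_cost = doc_tokens / 1_000_000 * 0.80     # ≈ $0.08
print(f"Sonnet: ${sonnet_cost:.2f}  Haiku: ${haiku_cost:.2f}")
```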