Found more than one `BaseRetriever` in app while trying to use TruLens to evaluate results for different LangChain chains

Binjan Iyer - May 28 - Dev Community

Hello All,
I am using "create_retrieval_chain", "create_history_aware_retriever" and "create_stuff_documents_chain" for my RAG application. When I integrate TruLens to evaluate the results, it shows me this error:

ValueError: Found more than one `BaseRetriever` in app:
        <class 'langchain_core.vectorstores.VectorStoreRetriever'> at bound.branches[0][1].last
        <class 'langchain_core.vectorstores.VectorStoreRetriever'> at bound.default.last
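From reading the LangChain source, create_history_aware_retriever appears to return a RunnableBranch in which the same retriever ends both branches: the branch taken when there is no chat history, and the rephrase-then-retrieve branch. That would match the two paths in the error (bound.branches[0][1].last and bound.default.last). Below is a simplified sketch of that internal structure (my own reading, not the exact LangChain code; vectDB_as_retriever, llm and history_aware_retriever_chain_prompt are the arguments I pass in my code further down):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch

# Simplified sketch of what create_history_aware_retriever builds internally
history_aware_structure = RunnableBranch(
    (
        # No chat history: pass the raw input straight to the retriever
        lambda x: not x.get("chat_history", False),
        (lambda x: x["input"]) | vectDB_as_retriever,  # -> bound.branches[0][1].last
    ),
    # Otherwise: rephrase the question with the LLM first, then retrieve
    history_aware_retriever_chain_prompt | llm | StrOutputParser() | vectDB_as_retriever,  # -> bound.default.last
)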

Here is my code:

import os

import numpy as np
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.callbacks import get_openai_callback
from trulens_eval import Feedback, TruChain
from trulens_eval.app import App
from trulens_eval.feedback.provider import OpenAI

# DOCUMENT_CHAIN_PROMT, HISTORY_AWARE_RETRIEVER_CHAIN_PROMPT, vectDB_as_retriever,
# chat_history and prompt are defined elsewhere in my application.

# Initialize the language model with the OpenAI API key and model name from environment variables
llm = ChatOpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    model_name=os.environ["OPENAI_API_GPT_MODEL"],
    temperature=0.2
)

document_chain_prompt = ChatPromptTemplate.from_messages(DOCUMENT_CHAIN_PROMT)

# Create the document chain using the language model and the prompt template
document_chain = create_stuff_documents_chain(
    llm,
    document_chain_prompt
)

# Define the prompt template for generating a search query based on the chat history
history_aware_retriever_chain_prompt = ChatPromptTemplate.from_messages(HISTORY_AWARE_RETRIEVER_CHAIN_PROMPT)

# Create a history-aware retriever chain using the language model, retriever, and the prompt template
history_aware_retriever_chain = create_history_aware_retriever(
    llm,
    vectDB_as_retriever,
    history_aware_retriever_chain_prompt
)

#################################################################

# TruLens feedback provider (not shown in the original snippet; this follows the trulens_eval quickstart)
provider = OpenAI()

# Select the context to be used in feedback; the location of the context is app specific.
# The ValueError above is raised on this call.
context = App.select_context(history_aware_retriever_chain)

# Define a groundedness feedback function
f_groundedness = (
    Feedback(provider.groundedness_measure_with_cot_reasons)
    .on(context.collect())  # collect context chunks into a list
    .on_output()
)

# Question/answer relevance between the overall question and the answer
f_answer_relevance = (
    Feedback(provider.relevance)
    .on_input_output()
)

# Question/statement relevance between the question and each context chunk
f_context_relevance = (
    Feedback(provider.context_relevance_with_cot_reasons)
    .on_input()
    .on(context)
    .aggregate(np.mean)
)

tru_recorder = TruChain(
    history_aware_retriever_chain,
    app_id=os.environ["truLens_app_id"],
    feedbacks=[f_answer_relevance, f_context_relevance, f_groundedness]
)
#########################################################################

# Create a retrieval chain combining the history-aware retriever chain and the document chain
retrieval_chain = create_retrieval_chain(history_aware_retriever_chain, document_chain)

# Execute the chain and track token usage with the OpenAI callback
with get_openai_callback() as cb:
    # Invoke the retrieval chain with the chat history and user input
    response = retrieval_chain.invoke({
        "chat_history": chat_history,
        "input": prompt,  # Required for HISTORY_AWARE_RETRIEVER_CHAIN_PROMPT
    })
    print(cb)  # Print token usage and cost information
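For reference, once the feedback setup works I plan to record calls following the trulens_eval quickstart pattern, roughly as sketched below. This is only a sketch: tru_recorder above wraps history_aware_retriever_chain, so to capture the combined chain it would presumably need to wrap retrieval_chain instead.

# Sketch of the standard trulens_eval recording pattern (from the quickstart);
# tru_recorder would likely need to wrap retrieval_chain for this to record the full call.
with tru_recorder as recording:
    response = retrieval_chain.invoke({
        "chat_history": chat_history,
        "input": prompt,
    })

record = recording.get()  # the Record produced by the call above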