LangChain: Automating Large Language Model (LLM) Evaluation

Vikas 16 Feb, 2024
11 min read

Introduction

Large Language Models (LLMs) have captivated the world with their ability to generate human-quality text, translate languages, summarize content, and answer complex questions. Prominent examples include OpenAI’s GPT-3.5, Google’s Gemini, and Meta’s Llama 2.

As LLMs become more powerful and sophisticated, measuring the performance of LLM-based applications becomes increasingly important. Evaluating LLMs is essential to ensuring their performance, reliability, and fairness in various NLP applications. In this article, we will explore the needs, challenges, and methods associated with evaluating large language models.

Learning Objectives

  • Understand the need for LLM evaluation
  • Explore the challenges faced in LLM evaluation
  • Learn about various ways to evaluate LLMs
  • Learn about LLM-based automated evaluation using LangChain
  • Explore various types of string evaluators and comparison evaluators available in LangChain

This article was published as a part of the Data Science Blogathon.

Need for LLM Evaluation

As language models continue to advance in capability and size, researchers, developers, and business users face issues such as model hallucination, toxic or violent responses, prompt injection, and jailbreaking.

These issues can be placed on a two-dimensional landscape of frequency and severity. Hallucination is observed frequently, but its severity is lower than that of jailbreaking and prompt injection, while toxic and violent responses sit at a moderate level on both dimensions.


Let’s discuss such issues in detail.

Hallucination

Hallucination is a phenomenon that occurs in Large Language Models (LLMs), where the model generates text that is not factual, is incoherent, or is inconsistent with the input context. While LLMs excel at generating text that appears coherent and contextually relevant, they can also occasionally produce outputs that go beyond the scope of the input context, resulting in hallucinatory or misleading information. For example, in summarization tasks, LLMs may generate summaries that contain information that is not present in the original text.

Toxic, Hateful, and Violent Responses

The generation of toxic, hateful, and violent responses from LLMs is a concerning aspect that highlights the ethical challenges these models pose. Such responses can be harmful and offensive to individuals and groups. We should evaluate LLMs to assess the quality and appropriateness of their outputs.


Prompt Injection

Prompt injection is a technique that involves bypassing filters or manipulating the LLM by using carefully crafted prompts that cause the model to ignore previous instructions or perform unintended actions. This can result in unintended consequences, such as data leakage, unauthorized access, or other security breaches.


Jailbreaking

Jailbreaking typically refers to the generation of content or responses that involve circumventing restrictions, breaking rules, or engaging in unauthorized activities. A common jailbreaking method is pretending, where the prompt asks the model to role-play a persona or scenario in which its usual restrictions supposedly do not apply.


Robust LLM evaluation is essential to mitigate such issues and ensure coherent and contextually aligned responses. High-quality LLM evaluation helps in:

  • Performance Assessment: Understanding the strengths and weaknesses of the model.
  • Fine-Tuning and Optimization: Identifying areas where the model can be fine-tuned and optimized.
  • Generalization and Robustness: Assessing the model’s ability to generalize its knowledge to diverse inputs and tasks.
  • User Experience: Evaluating how well the model responds to user inputs and generates contextually relevant and coherent outputs for a positive user experience.
  • Ethical Considerations: Identifying and addressing ethical concerns related to biases, misinformation, or potentially harmful outputs.
  • Further Improvement: Identifying areas where further research is needed to advance state-of-the-art capabilities.

Challenges in LLM Evaluation

  • Non-deterministic outputs: Given the same input prompt, LLMs can produce different responses on different re-runs.
  • Lack of Ground Truth: Acquiring reference/labeled data of high quality can pose a challenge, particularly in situations where there are multiple acceptable responses or in tasks that involve open-ended requirements.
  • Real-World Applicability: Existing academic benchmark datasets don’t generalize well to enterprise-specific, real-world use cases.
  • Subjectivity: Evaluation types such as Human Evaluation can be subjective and prone to bias. Various human assessors may hold differing viewpoints, and there might be a lack of uniformity in the evaluation criteria.

Ways to Evaluate LLMs

Numerous approaches exist for evaluating LLMs. Let’s explore some typical methods for assessing LLM outputs.

1. Human Evaluation

The most straightforward and reliable way to evaluate the performance of an LLM is through human evaluation. Human evaluators can assess various aspects of the model’s performance, such as its factual accuracy, coherence, relevance, etc. This can be done through pairwise comparisons, ranking, or assessing the relevance and coherence of the generated text.

Here are some tools that can aid in human evaluation: Argilla (an open-source platform), Label Studio, and Prodigy.

2. End-user Feedback

End-user feedback (user ratings, surveys, platform usage, etc.) provides insights into how well the model performs in real-world scenarios and meets user expectations.

3. LLM-based Evaluation 

This approach leverages an LLM itself to assess the quality of the outputs. It is a dynamic approach that allows us to create autonomous and adaptive evaluation pipelines without the need for external evaluators or human intervention.

Frameworks for LLM-based evaluation: LangChain, RAGAS, LangSmith

4. Academic Benchmarks

Benchmarks provide standardized datasets and tasks that enable researchers and practitioners to compare the performance of different LLMs. Benchmark datasets (such as GLUE, SuperGLUE, SQuAD, etc.) cover a range of language tasks, such as question answering, text completion, summarization, translation, and sentiment analysis.

5. Standard Quantitative Metrics

Quantitative metrics such as BLEU score, ROUGE score, Perplexity, etc. measure surface-level similarities between generated text and reference data on specific tasks. However, they may not capture the broader language understanding capabilities of LLMs.
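As a quick illustration of how surface-level these metrics can be, here is a minimal sketch using NLTK’s sentence_bleu (NLTK is an assumption here; it is not part of the LangChain examples later in this article):

# A minimal sketch: BLEU rewards n-gram overlap, not meaning (assumes nltk is installed).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "The socks are in the third drawer in the dresser".split()
candidate = "You can find them in the dresser's third drawer".split()

# The candidate is semantically equivalent to the reference but shares few
# higher-order n-grams with it, so the BLEU score comes out low.
score = sentence_bleu([reference], candidate, smoothing_function=SmoothingFunction().method1)
print(score)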

Pros and Cons of Different Evaluation Methods

  • Human Evaluation: reliable and nuanced, but slow, costly, and subjective.
  • End-user Feedback: reflects real-world usage, but can be noisy and is only available after deployment.
  • LLM-based Evaluation: automated and scalable, but inherits the cost and biases of the grading LLM.
  • Academic Benchmarks: standardized and comparable, but may not generalize to enterprise-specific use cases.
  • Quantitative Metrics: cheap and reproducible, but capture only surface-level similarity.

LLM-based (Automated) Evaluation in LangChain

LangChain is an open-source framework for building LLM-powered applications. LangChain offers
various types of evaluators to measure performance and integrity on diverse data.

Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to unique requirements. These evaluators can be applied to different chain and LLM implementations in the LangChain library.

Evaluator types in LangChain:

  • String Evaluators: These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
  • Comparison Evaluators: These evaluators are designed to compare predictions from two runs on a common input.
  • Trajectory Evaluators: These are used to evaluate the entire trajectory of agent actions.

Let’s discuss each evaluator type in detail.

1. String Evaluators

A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string (ground truth) or an input (such as a question or a prompt). These evaluators can be customized to tailor the evaluation process to fit your application’s specific requirements.

String Evaluator implementations in LangChain:

Criteria Evaluation

It allows us to verify whether the output from an LLM or a chain adheres to a specified set of criteria. LangChain offers various criteria such as correctness, relevance, harmfulness, etc. Below is the complete list of criteria available in LangChain.

from langchain.evaluation import Criteria
print(list(Criteria))

Let’s explore a few examples by employing these predefined criteria.

a. Conciseness

We instantiate the criteria evaluator by invoking the load_evaluator() function with criteria="conciseness" to check whether the output is concise.

from langchain.evaluation import load_evaluator

# With no llm specified, load_evaluator initializes a GPT-4 chat model as the grader
# (an OPENAI_API_KEY is therefore required).
evaluator = load_evaluator("criteria", criteria="conciseness")
eval_result = evaluator.evaluate_strings(
    prediction="""Joe Biden is an American politician 
    who is the 46th and current president of the United States. 
    Born in Scranton, Pennsylvania on November 20, 1942, 
    Biden moved with his family to Delaware in 1953. 
    He graduated from the University of Delaware 
    before earning his law degree from Syracuse University. 
    He was elected to the New Castle County Council in 1970 
    and to the U.S. Senate in 1972.""",
    input="Who is the president of United States?",
)
print(eval_result)

All string evaluators provide an evaluate_strings() method that takes:

  • input (str) – The input prompt/query
  • prediction (str) – Model response/output

The criteria evaluators return a dictionary with the following attributes:  

  • reasoning: A string containing the chain-of-thought reasoning the LLM generated before arriving at the score.
  • value: A “Y” or “N” corresponding to the score.
  • score: Binary integer (0 or 1), where 1 indicates the output is compliant with the criteria, and 0 indicates non-compliance.
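For instance, these fields can be read directly from the returned dictionary (illustrative; the reasoning text varies between runs because the grading LLM is non-deterministic):

print(eval_result["score"])      # 1 (compliant) or 0 (non-compliant)
print(eval_result["value"])      # "Y" or "N"
print(eval_result["reasoning"])  # the grader's chain-of-thought explanation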

b. Correctness

The correctness criterion requires reference labels. Here we initialize the labeled_criteria evaluator and additionally specify the reference string.

evaluator = load_evaluator("labeled_criteria", criteria="correctness")

# The reference below is deliberately counterfactual; labeled_criteria grades the
# prediction against the provided reference rather than against world knowledge.
eval_result = evaluator.evaluate_strings(
    input="Is there any river on the moon?",
    prediction="There is no evidence of river on the Moon",
    reference="""In a hypothetical future, lunar scientists discovered 
    an astonishing phenomenon—a subterranean river 
    beneath the Moon's surface""",
)
print(eval_result)

c. Custom Criteria

We can also define our own custom criteria by passing a dictionary in the form {“criterion_name”: “criterion_description”}.

from langchain.evaluation import EvaluatorType

custom_criteria = {
    "numeric": "Does the output contain numeric information?",
    "mathematical": "Does the output contain mathematical information?"
}
prompt = "Tell me a joke"

output = """
Why did the mathematician break up with his girlfriend?

Because she had too many "irrational" issues!
"""
eval_chain = load_evaluator(
    EvaluatorType.CRITERIA,
    criteria=custom_criteria,
)
eval_result = eval_chain.evaluate_strings(prediction=output, input=prompt)
print("===================== Multi-criteria evaluation =====================")
print(eval_result)

Embedding Distance

Embedding distance measures the semantic similarity (or dissimilarity) between a prediction and a reference label string. It returns a distance score, where a lower value indicates higher similarity between the prediction and the reference, based on their embedded representations.

evaluator = load_evaluator("embedding_distance")
evaluator.evaluate_strings(prediction="Total Profit is 04.25 Cr", 
reference="Total return is 4.25 Cr")

By default, the Embedding distance evaluator uses cosine distance. Below is the complete list of distances supported.

from langchain.evaluation import EmbeddingDistance
list(EmbeddingDistance)
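A different metric can be selected when loading the evaluator. The small sketch below assumes the default embedding backend (OpenAI embeddings) is available:

from langchain.evaluation import load_evaluator, EmbeddingDistance

# Use Euclidean distance instead of the default cosine distance.
evaluator = load_evaluator(
    "embedding_distance", distance_metric=EmbeddingDistance.EUCLIDEAN
)
evaluator.evaluate_strings(
    prediction="Total Profit is 04.25 Cr",
    reference="Total return is 4.25 Cr",
)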

Exact Match

Exact match is probably the simplest way to evaluate an LLM output against a reference. We can also relax the exactness (for example, by ignoring case) when comparing strings.

from langchain.evaluation import ExactMatchStringEvaluator

# Default: case-sensitive exact match
exact_match_evaluator = ExactMatchStringEvaluator()

# Relaxed: ignore case when comparing
exact_match_evaluator = ExactMatchStringEvaluator(ignore_case=True)
exact_match_evaluator.evaluate_strings(
    prediction="Data Science",
    reference="Data science",
)

Evaluating Structured Output

JSON evaluators help in assessing the LLM’s string output when it is expected to be in JSON format. The following JSON evaluators provide the functionality to check the consistency of the model’s output.

a. JsonValidityEvaluator

  • Checks the validity of a JSON string prediction
  • Requires Input?: No
  • Requires Reference?: No
from langchain.evaluation import JsonValidityEvaluator

evaluator = JsonValidityEvaluator()

prediction = '{"name": "Hari", "Exp": 5, "Location": "Pune"}'

result = evaluator.evaluate_strings(prediction=prediction)
print(result)
"
prediction = '{"name": "Hari", "Exp": 5, "Location": "Pune",}'
result = evaluator.evaluate_strings(prediction=prediction)
print(result)
"

b. JsonEqualityEvaluator

  • Assesses whether a JSON prediction matches a given reference or not.
  • Requires Input?: No
  • Requires Reference?: Yes
from langchain.evaluation import JsonEqualityEvaluator
evaluator = JsonEqualityEvaluator()
result = evaluator.evaluate_strings(prediction='{"Exp": 2}', reference='{"Exp": 2.5}')
print(result)
"

c. JsonEditDistanceEvaluator

  • Computes a normalized Damerau-Levenshtein distance between two canonicalized JSON strings.
  • Requires Input?: No
  • Requires Reference?: Yes
  • Distance Function: Damerau-Levenshtein (by default)
from langchain.evaluation import JsonEditDistanceEvaluator

evaluator = JsonEditDistanceEvaluator()


# "Ram" vs. "Rama" differ by a single edit, so the normalized distance is small.
result = evaluator.evaluate_strings(
    prediction='{"name": "Ram", "Exp": 2}', reference='{"name": "Rama", "Exp": 2}'
)
print(result)

d. JsonSchemaEvaluator

  • Validates a JSON prediction against a provided JSON schema
  • Requires Input?: Yes
  • Requires Reference?: Yes (A JSON schema)
  • Score: True (No errors) or False (Error occurred)
from langchain.evaluation import JsonSchemaEvaluator

evaluator = JsonSchemaEvaluator()

# "Exp": 4 violates the schema's "minimum": 5 constraint, so validation fails.
result = evaluator.evaluate_strings(
    prediction='{"name": "Rama", "Exp": 4}',
    reference='{"type": "object", "properties": {"name": {"type": "string"},'
    '"Exp": {"type": "integer", "minimum": 5}}}',
)
print(result)
"

Regex Match

Using regex match, we can evaluate model predictions against a custom regular expression.

For example, the regex match evaluator below checks for the presence of a YYYY-MM-DD date string.

from langchain.evaluation import RegexMatchStringEvaluator
evaluator = RegexMatchStringEvaluator()
evaluator.evaluate_strings(
    prediction="Joining date is 2021-04-26",
    reference=".*\\b\\d{4}-\\d{2}-\\d{2}\\b.*",
)
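The evaluator also accepts standard re flags, which is handy when matching should be case-insensitive. A small sketch (the prediction and pattern here are illustrative):

import re
from langchain.evaluation import RegexMatchStringEvaluator

# Case-insensitive matching via the standard re.IGNORECASE flag.
evaluator = RegexMatchStringEvaluator(flags=re.IGNORECASE)
evaluator.evaluate_strings(
    prediction="JOINING DATE IS 2021-04-26",
    reference=".*\\bjoining date\\b.*",
)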

Scoring Evaluator

The scoring evaluator helps in assessing the model’s predictions on a specified scale (1-10 by default) based on a custom rubric. Below is an example of a custom accuracy rubric that grades the prediction on a scale of 1 to 10 against the specified reference.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation import load_evaluator
accuracy_criteria = {
    "accuracy": """
Score 1: The answer is completely unrelated to the reference.
Score 3: The answer has minor relevance but does not align with the reference.
Score 5: The answer has moderate relevance but contains inaccuracies.
Score 7: The answer aligns with the reference but has minor errors or omissions.
Score 10: The answer is completely accurate and aligns perfectly with the reference."""
}
evaluator = load_evaluator(
    "labeled_score_string",
    criteria=accuracy_criteria,
    llm=ChatOpenAI(model="gpt-4"),
)

Let’s score different predictions based on the above scoring evaluator.

# Correct
eval_result = evaluator.evaluate_strings(
    prediction="You can find them in the dresser's third drawer.",
    reference="The socks are in the third drawer in the dresser",
    input="Where are my socks?",
)
print(eval_result)
# Correct but lacking information
eval_result = evaluator.evaluate_strings(
    prediction="You can find them in the dresser.",
    reference="The socks are in the third drawer in the dresser",
    input="Where are my socks?",
)
print(eval_result)
# Incorrect
eval_result = evaluator.evaluate_strings(
    prediction="You can find them in the dog's bed.",
    reference="The socks are in the third drawer in the dresser",
    input="Where are my socks?",
)
print(eval_result)

String Distance

One of the simplest ways to compare the string output of an LLM or chain against a reference label is by using string distance measurements such as Levenshtein or postfix distance.

from langchain.evaluation import load_evaluator
evaluator = load_evaluator("string_distance")
evaluator.evaluate_strings(
    prediction="Senior Data Scientist",
    reference="Data Scientist",
)
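Other string distance metrics can be chosen when loading the evaluator. A brief sketch (it assumes the rapidfuzz package, which backs these metrics, is installed):

from langchain.evaluation import load_evaluator, StringDistance

# Use Jaro-Winkler similarity instead of the default metric.
evaluator = load_evaluator("string_distance", distance=StringDistance.JARO_WINKLER)
evaluator.evaluate_strings(
    prediction="Senior Data Scientist",
    reference="Data Scientist",
)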

2. Comparison Evaluators

Comparison evaluators in LangChain help compare the outputs of two different chains or LLMs. They are useful for comparative analyses, such as A/B testing between two language models or comparing two different outputs from the same model.

Comparison Evaluator implementations in LangChain:

  • Pairwise String Comparison
  • Pairwise Embedding Distance

Pairwise String Comparison

This comparison helps in answering questions like –

  • Which LLM or prompt produces a preferred output for a given question?
  • Which examples should I include for the few-shot example selection?
  • Which output is better to include for fine-tuning?
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("labeled_pairwise_string")
evaluator.evaluate_string_pairs(
    prediction="there are 5 days",
    prediction_b="7",
    input="how many days in a week?",
    reference="Seven",
)
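When no reference answer is available, the unlabeled variant can be used; it relies entirely on the grading LLM’s own judgment. A sketch, reusing the same example without the reference:

from langchain.evaluation import load_evaluator

# Unlabeled variant: no reference, the grading LLM decides which output is better.
evaluator = load_evaluator("pairwise_string")
evaluator.evaluate_string_pairs(
    prediction="there are 5 days",
    prediction_b="There are seven days in a week.",
    input="how many days in a week?",
)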

Pairwise Embedding Distance

This method gauges the similarity (or dissimilarity) between two predictions made on a common or similar input by embedding the predictions and calculating the vector distance between the two embeddings.

from langchain.evaluation import load_evaluator
evaluator = load_evaluator("pairwise_embedding_distance")
evaluator.evaluate_string_pairs(
    prediction="Rajasthan is hot in June", prediction_b="Rajasthan is warm in June."
)
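3. Trajectory Evaluators

Trajectory evaluators grade an agent’s entire sequence of tool calls and observations rather than only its final answer. Below is a minimal, hedged sketch of how the built-in "trajectory" evaluator can be invoked; the agent_trajectory here is hand-constructed purely for illustration, whereas in a real application it would come from an AgentExecutor run with return_intermediate_steps=True.

from langchain.evaluation import load_evaluator
from langchain.schema import AgentAction

# Like the other LLM-graded evaluators, this uses a GPT-4 grader when no llm is passed.
evaluator = load_evaluator("trajectory")

# Hand-constructed (hypothetical) trajectory: a single (action, observation) step.
trajectory = [
    (
        AgentAction(
            tool="search",
            tool_input="weather in Jaipur today",
            log="I should look up today's weather in Jaipur.",
        ),
        "It is 41 degrees Celsius and sunny in Jaipur today.",
    )
]

result = evaluator.evaluate_agent_trajectory(
    input="What is the weather in Jaipur today?",
    prediction="It is currently 41 degrees Celsius and sunny in Jaipur.",
    agent_trajectory=trajectory,
)
print(result)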

Conclusion

In this article, we discussed LLM evaluation, why it is needed, and various popular methods for evaluating LLMs. We explored the different types of built-in and custom evaluators, as well as the various criteria LangChain offers to assess the quality and appropriateness of an LLM’s output.

Key Takeaways

  • Assessing LLMs is crucial for confirming their effectiveness, dependability, and impartiality across diverse NLP uses as they grow more sophisticated and face challenges like hallucination, toxic or violent responses, prompt injection, and jailbreaking.
  • LangChain facilitates automated evaluation of LLMs by offering various types of ready-to-use and customizable evaluators.
  • In addition to the score, LangChain evaluators also furnish reasoning that encapsulates the “chain of thought reasoning” derived from the LLM.
  • LangChain offers a variety of string evaluators, including Criteria Evaluation, Embedding Distance, Exact Match, JSON Evaluators, Regex Match, Scoring Evaluators, and String Distance.

Frequently Asked Questions

Q1. What is the purpose of LLM evaluation?

A. The purpose of LLM evaluation is to ensure the performance, reliability, and fairness of Large Language Models (LLMs). It is essential for identifying and addressing issues such as model hallucination, toxic or violent responses, and vulnerabilities like prompt injection and jailbreaking, which can lead to unintended and potentially harmful outcomes.

Q2. What are the major challenges in LLM evaluation?

A. The major challenges in Large Language Model (LLM) evaluation include non-deterministic outputs, difficulty in obtaining a clear ground truth, limitations in applying academic benchmarks to real-world use cases, and the subjectivity and potential bias in human evaluations.

Q3. What are the popular ways to evaluate LLMs?

A. The popular ways to evaluate Large Language Models (LLMs) include human evaluation, utilizing end-user feedback through ratings and surveys, employing LLM-based evaluation frameworks like LangChain and RAGAS, leveraging academic benchmarks such as GLUE and SQuAD, and using standard quantitative metrics like BLEU and ROUGE scores.

Q4. What are the various criteria offered by LangChain for evaluating LLMs?

A. LangChain provides a range of evaluation criteria that include (but are not limited to) conciseness, relevance, correctness, coherence, harmfulness, maliciousness, helpfulness, controversiality, misogyny, criminality, insensitivity, depth, creativity, and detail.

Q5. What are the types of comparison evaluators available in LangChain?

A. The types of comparison evaluators available in LangChain include Pairwise String Comparison, Pairwise Embedding Distance, and Custom Pairwise Evaluator. These tools facilitate comparative analysis, such as A/B testing between language models or assessing different outputs from the same model.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Vikas 16 Feb, 2024

A Data Science professional with 6.5 years of experience in data science, machine learning, and programming. Hands-on experience in different domains like data analytics, deep learning, big data, and natural language processing.
