camel.benchmarks.ragbench
RagasFields
Constants for RAGAS evaluation field names.
annotate_dataset
Annotate the dataset by adding contexts and answers using the provided functions.
Parameters:
- dataset (Dataset): The input dataset to annotate.
- context_call (Optional[Callable[[Dict[str, Any]], List[str]]]): Function to generate context for each example.
- answer_call (Optional[Callable[[Dict[str, Any]], str]]): Function to generate answer for each example.
Returns:
Dataset: The annotated dataset with added contexts and/or answers.
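A minimal sketch of calling annotate_dataset, assuming the function is importable from camel.benchmarks.ragbench and that each callable receives one example dict and returns the documented types; the two callables below are stand-ins for a real retriever and agent:

```python
from datasets import Dataset
from camel.benchmarks.ragbench import annotate_dataset  # assumed import path

data = Dataset.from_dict({"question": ["What does RAGBench measure?"]})

def context_call(example: dict) -> list[str]:
    # A real retriever would return passages relevant to example["question"].
    return ["RAGBench evaluates context relevancy and faithfulness."]

def answer_call(example: dict) -> str:
    # A real ChatAgent would generate an answer to the question here.
    return "It measures context relevancy and answer faithfulness."

annotated = annotate_dataset(data, context_call, answer_call)
```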
rmse
Calculate Root Mean Squared Error (RMSE).
Parameters:
- input_trues (Sequence[float]): Ground truth values.
- input_preds (Sequence[float]): Predicted values.
Returns:
Optional[float]: RMSE value, or None if inputs have different lengths.
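Illustrative usage, assuming the helper is importable from camel.benchmarks.ragbench; the expected value follows from the standard formula RMSE = sqrt(mean((true - pred)^2)):

```python
from camel.benchmarks.ragbench import rmse  # assumed import path

trues = [0.9, 0.4, 0.7]
preds = [1.0, 0.5, 0.6]

# Each squared error is 0.01, so RMSE = sqrt(0.01) = 0.1.
print(rmse(trues, preds))       # ~0.1
print(rmse(trues, preds[:2]))   # None: inputs have different lengths
```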
auroc
Calculate the Area Under the Receiver Operating Characteristic curve (AUROC).
Parameters:
- trues (Sequence[bool]): Ground truth binary values.
- preds (Sequence[float]): Predicted probability values.
Returns:
float: AUROC score.
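A small example under the same import-path assumption; when every positive example is scored above every negative one, the score is exactly 1.0:

```python
from camel.benchmarks.ragbench import auroc  # assumed import path

trues = [True, False, True, False]
preds = [0.9, 0.2, 0.7, 0.4]

# Both positives (0.9, 0.7) outrank both negatives (0.4, 0.2) -> AUROC = 1.0.
print(auroc(trues, preds))
```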
ragas_calculate_metrics
Calculate RAGAS evaluation metrics.
Parameters:
- dataset (Dataset): The dataset containing predictions and ground truth.
- pred_context_relevance_field (Optional[str]): Field name for predicted context relevance.
- pred_faithfulness_field (Optional[str]): Field name for predicted faithfulness.
- metrics_to_evaluate (Optional[List[str]]): List of metrics to evaluate.
- ground_truth_context_relevance_field (str): Field name for ground truth context relevance.
- ground_truth_faithfulness_field (str): Field name for ground truth faithfulness (adherence).
Returns:
Dict[str, Optional[float]]: Dictionary of calculated metrics.
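A sketch of computing the metrics over a toy dataset. The column names below are illustrative rather than the library's defaults, so both ground-truth field names are passed explicitly:

```python
from datasets import Dataset
from camel.benchmarks.ragbench import ragas_calculate_metrics  # assumed import path

ds = Dataset.from_dict({
    "relevance_true": [0.8, 0.3],     # ground truth context relevance
    "relevance_pred": [0.9, 0.2],     # predicted context relevance
    "adherence_true": [True, False],  # ground truth faithfulness (adherence)
    "adherence_pred": [0.95, 0.10],   # predicted faithfulness probability
})

metrics = ragas_calculate_metrics(
    ds,
    pred_context_relevance_field="relevance_pred",
    pred_faithfulness_field="adherence_pred",
    ground_truth_context_relevance_field="relevance_true",
    ground_truth_faithfulness_field="adherence_true",
)
print(metrics)  # metric name -> value (None when a metric cannot be computed)
```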
ragas_evaluate_dataset
Evaluate the dataset using RAGAS metrics.
Parameters:
- dataset (Dataset): Input dataset to evaluate.
- contexts_field_name (Optional[str]): Field name containing contexts.
- answer_field_name (Optional[str]): Field name containing answers.
- metrics_to_evaluate (Optional[List[str]]): List of metrics to evaluate.
Returns:
Dataset: Dataset with added evaluation metrics.
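A hedged sketch, assuming the dataset already carries questions plus annotated contexts and answers (e.g. the output of annotate_dataset above) and that the underlying RAGAS tooling has an LLM backend configured via the usual API-key environment variables; the field names here are illustrative:

```python
from camel.benchmarks.ragbench import ragas_evaluate_dataset  # assumed import path

evaluated = ragas_evaluate_dataset(
    annotated,                        # dataset produced by annotate_dataset
    contexts_field_name="contexts",   # illustrative field names
    answer_field_name="answer",
    # metrics_to_evaluate is Optional and omitted here.
)
print(evaluated.column_names)  # evaluation metrics appear as added columns
```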
RAGBenchBenchmark
RAGBench Benchmark for evaluating RAG performance.
This benchmark uses the rungalileo/ragbench dataset to evaluate retrieval-augmented generation (RAG) systems. It measures context relevancy and faithfulness metrics as described in https://arxiv.org/abs/2407.11005.
Parameters:
- processes (int, optional): Number of processes for parallel processing.
- subset (str, optional): Dataset subset to use (e.g., "hotpotqa").
- split (str, optional): Dataset split to use (e.g., "test").
__init__
download
Download the RAGBench dataset.
load
Load the RAGBench dataset.
Parameters:
- force_download (bool, optional): Whether to force download the data.
run
Run the benchmark evaluation.
Parameters:
- agent (ChatAgent): Chat agent for generating answers.
- auto_retriever (AutoRetriever): Retriever for finding relevant contexts.
Returns:
Dict[str, Optional[float]]: Dictionary of evaluation metrics.
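Putting the class methods together, a hedged end-to-end sketch; the import locations and the bare ChatAgent()/AutoRetriever() constructors are assumptions, and a real run needs model backends, API keys, and retriever storage configured:

```python
from camel.agents import ChatAgent
from camel.retrievers import AutoRetriever
from camel.benchmarks.ragbench import RAGBenchBenchmark  # assumed import path

benchmark = RAGBenchBenchmark(processes=1, subset="hotpotqa", split="test")
benchmark.load()                 # downloads the subset on first use
# benchmark.load(force_download=True)  # re-fetch the data if needed

agent = ChatAgent()              # generates answers; default settings assumed
retriever = AutoRetriever()      # retrieves contexts; default settings assumed

results = benchmark.run(agent, retriever)
print(results)                   # metric name -> value (or None)
```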