AgentResponse
- score (float): Similarity score comparing the current answer to the correct answer; must lie within the range [0, 1].
VerificationResponse
- is_correct (bool): Whether the answer is correct.
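A minimal sketch of what these response schemas might look like, assuming Pydantic models (the library's actual base class and field constraints may differ):

```python
from pydantic import BaseModel, Field

class AgentResponse(BaseModel):
    # Similarity score constrained to the documented range [0, 1].
    score: float = Field(..., ge=0.0, le=1.0)

class VerificationResponse(BaseModel):
    # True when the answer matches the golden answer.
    is_correct: bool
```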
CoTDataGenerator
- Monte Carlo Tree Search (MCTS)
- Binary Search Error Detection
- Dual-Agent Verification System
- Solution Tree Management
- chat_agent (Optional[ChatAgent]): Optional single agent for both tasks (legacy mode). (default: None)
- generator_agent (Optional[ChatAgent]): Optional specialized agent for answer generation. (default: None)
- verifier_agent (Optional[ChatAgent]): Optional specialized agent for answer verification. (default: None)
- golden_answers (Dict[str, str]): Dictionary containing pre-defined correct answers for validation and comparison. Required for answer verification.
- search_limit (int): Maximum number of search iterations allowed. (default: 100)
init
- Single-agent mode (legacy): Pass a single chat_agent that will be used for both generation and verification.
- Dual-agent mode: Pass separate generator_agent and verifier_agent for specialized tasks.
- chat_agent (Optional[ChatAgent]): Optional single agent for both tasks (legacy mode). (default: None)
- generator_agent (Optional[ChatAgent]): Optional specialized agent for answer generation. (default: None)
- verifier_agent (Optional[ChatAgent]): Optional specialized agent for answer verification. (default: None)
- golden_answers (Dict[str, str]): Dictionary containing pre-defined correct answers for validation and comparison. Required for answer verification.
- search_limit (int): Maximum number of search iterations allowed. (default: 100)
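A construction sketch for both modes. The import paths and the practice of passing a system message string to ChatAgent are assumptions; adjust them to match your installation:

```python
from camel.agents import ChatAgent
from camel.datagen import CoTDataGenerator

golden_answers = {"What is 2 + 2?": "4"}

# Dual-agent mode: specialized agents for generation and verification.
gen = CoTDataGenerator(
    generator_agent=ChatAgent("You produce step-by-step solutions."),
    verifier_agent=ChatAgent("You verify answers against references."),
    golden_answers=golden_answers,
    search_limit=100,
)

# Single-agent mode (legacy): one agent handles both tasks.
legacy_gen = CoTDataGenerator(
    chat_agent=ChatAgent("You solve and verify problems."),
    golden_answers=golden_answers,
)
```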
get_answer
- question (str): The question to ask.
- context (str): Additional context for the question. (default: "")
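A hedged usage example, continuing from the construction sketch above (`gen` is the dual-agent instance):

```python
# Generate a chain-of-thought answer, optionally steered by extra context.
answer = gen.get_answer(
    "What is 2 + 2?",
    context="Explain your reasoning step by step.",
)
print(answer)
```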
verify_answer
- question (str): The question being answered.
- answer (str): The answer to verify.
Verification fails if:
- the provided question doesn’t exist in the golden answers
- the answer’s meaning differs from the golden answer
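Continuing the example, a sketch of verification against the golden answers (the question must be a key in golden_answers):

```python
# Semantic check against the stored golden answer, not a string comparison.
is_correct = gen.verify_answer("What is 2 + 2?", answer)
print(is_correct)
```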
evaluate_partial_solution
- question (str): The question being solved.
- partial_solution (str): The partial solution generated so far. (default: "")
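A usage sketch, assuming the method returns the similarity score as a float (the return type isn't stated above):

```python
# Score an incomplete chain of thought against the question.
score = gen.evaluate_partial_solution(
    "What is 2 + 2?",
    partial_solution="First, rewrite 2 + 2 as 2 * 2, so ...",
)
```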
binary_search_error
- question (str): The question being solved.
- solution (str): The complete solution to analyze.
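The exact mechanics aren't spelled out above; the sketch below illustrates the general idea of binary-search error localization, assuming newline-separated solution steps, prefix scoring via evaluate_partial_solution, and a hypothetical 0.9 threshold (mirroring solve's early-stopping score). It is an illustration, not the library's implementation:

```python
def locate_last_correct_step(gen, question: str, solution: str,
                             threshold: float = 0.9) -> int:
    """Binary-search for the longest solution prefix that still scores well."""
    steps = solution.split("\n")
    lo, hi = 0, len(steps)  # invariant: the first `lo` steps are known-good
    while lo < hi:
        mid = (lo + hi + 1) // 2
        prefix = "\n".join(steps[:mid])
        if gen.evaluate_partial_solution(question, prefix) >= threshold:
            lo = mid      # prefix looks correct; the first error is later
        else:
            hi = mid - 1  # prefix already contains the error
    return lo  # number of leading steps that are correct
```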
solve
- Try to solve directly; if correct, return the solution.
- If not correct, search by iteratively generating new solutions and evaluating their similarity scores to find a good one. The search involves four steps (sketched after this section):
  a. Generation: generate new solution candidates with the generator agent.
  b. Evaluation: score each candidate's similarity to the golden answer.
  c. Selection: keep the best-scoring candidate found so far.
  d. Early stopping: stop once a sufficiently high-scoring solution (score > 0.9) is found.
- If the solution isn’t perfect, use binary search to locate errors.
- Generate a new solution based on the correct part of the initial solution.
- question (str): The question to solve.
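In practice a single call covers all four phases, e.g. `solution = gen.solve("What is 2 + 2?")`. The loop below is only an illustrative sketch of the search phase, assuming candidates are scored with evaluate_partial_solution and that iteration is bounded by search_limit; the real method encapsulates all of this:

```python
question = "What is 2 + 2?"
best_solution, best_score = None, 0.0
for _ in range(100):                                            # search_limit
    candidate = gen.get_answer(question)                        # a. generation
    score = gen.evaluate_partial_solution(question, candidate)  # b. evaluation
    if score > best_score:                                      # c. selection
        best_solution, best_score = candidate, score
    if best_score > 0.9:                                        # d. early stopping
        break
```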
import_qa_from_json
- data (Union[str, Dict[str, str]]): Either a path to a JSON file containing QA pairs or a dictionary of question-answer pairs. If a string is provided, it’s treated as a file path. The expected format is:
{"question1": "answer1", "question2": "answer2", ...}
export_solutions
The exported JSON includes:
- solutions: The solution tree with intermediate steps
- golden_answers: The reference answers used for verification
- export_time: ISO format timestamp of the export
- filepath (str, optional): Path where the JSON file will be saved. (default: 'solutions.json')
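A usage sketch with a hypothetical custom path:

```python
gen.export_solutions()                                # writes 'solutions.json'
gen.export_solutions(filepath="math_solutions.json")  # custom destination
```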