Analyze Module¶
LLM-based analysis stage for ranked candidates.
This module provides a minimal async analysis function that:
- Reduces context (abstract only for now)
- Calls the shared LLM model once per paper
- Returns structured AnalysisResult instances
Retries and concurrency limits can be layered in the orchestrator later.
- async agent.pipeline.analyze.analyze_candidates(*, task_query, analysis_inputs)[source]¶
Analyze candidates via agents or a heuristic fallback.
When an API key is configured and the environment flag
PIPELINE_USE_AGENTS_ANALYZE
is set, the configured LLM agent is used to produce structured outputs. Otherwise, a quick overlap-based heuristic is computed.
- Parameters:
  - task_query (
str
) – The task description that guides relevance.
  - analysis_inputs (
List
[AnalysisInput
]) – Ranked inputs containing candidates and optional snippets.
- Return type:
List
[AnalysisResult
]
- Returns:
One
AnalysisResult
per input, preserving order.
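The heuristic fallback path can be sketched as below. This is a minimal, self-contained illustration, not the module's actual implementation: the real AnalysisInput and AnalysisResult classes live in agent.pipeline.analyze, and their fields here (title, abstract, relevance) are assumptions for the example. The overlap score simply measures what fraction of the task query's tokens appear in a candidate's abstract.

```python
import asyncio
import re
from dataclasses import dataclass
from typing import List

# Hypothetical stand-ins for the real AnalysisInput / AnalysisResult
# types; field names are assumed for illustration only.
@dataclass
class AnalysisInput:
    title: str
    abstract: str

@dataclass
class AnalysisResult:
    title: str
    relevance: float  # 0.0 (no overlap) .. 1.0 (all query tokens present)

def _tokens(text: str) -> set:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def _overlap_score(task_query: str, text: str) -> float:
    """Fraction of query tokens that also appear in the text."""
    query_tokens = _tokens(task_query)
    if not query_tokens:
        return 0.0
    return len(query_tokens & _tokens(text)) / len(query_tokens)

async def analyze_candidates(
    *, task_query: str, analysis_inputs: List[AnalysisInput]
) -> List[AnalysisResult]:
    """Heuristic fallback only: one AnalysisResult per input, in order."""
    return [
        AnalysisResult(
            title=inp.title,
            relevance=_overlap_score(task_query, inp.abstract),
        )
        for inp in analysis_inputs
    ]

results = asyncio.run(analyze_candidates(
    task_query="graph neural networks",
    analysis_inputs=[
        AnalysisInput(title="GNN survey",
                      abstract="A survey of graph neural networks"),
        AnalysisInput(title="Unrelated",
                      abstract="Cooking pasta quickly"),
    ],
))
```

Because the function is a pure per-item mapping in this fallback mode, output order always matches input order, which is why retries and concurrency limits can safely be layered on top by the orchestrator.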