Origin uses one LLM for attribution — the model that reads the trace, the score, and the rubric, and returns the ranked culprit list. This is different from the model your agent pipeline runs on, which Origin does not control.
Default: Anthropic Claude
Claude is included when you `pip install aevyra-origin`. No extra flags needed. The default model is `claude-sonnet-4-5`; it's fast, capable, and well-suited to the diagnostic reasoning Origin requires.
Provider format
Models are specified as `provider/model` — the same convention as aevyra-reflex:
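As an illustrative sketch (the `parse_model` helper below is hypothetical, not part of the Origin API), a `provider/model` string splits on the first slash, which also preserves nested model names such as OpenRouter's:

```python
def parse_model(spec: str, default_provider: str = "anthropic") -> tuple[str, str]:
    """Split a 'provider/model' spec into its parts.

    A bare model name (no slash) falls back to the default provider.
    Splitting on the FIRST slash keeps nested names intact, e.g.
    OpenRouter's 'openrouter/meta-llama/llama-3.1-70b-instruct'.
    """
    if "/" in spec:
        provider, model = spec.split("/", 1)
        return provider, model
    return default_provider, spec

print(parse_model("openai/gpt-4o"))      # → ('openai', 'gpt-4o')
print(parse_model("claude-sonnet-4-5"))  # → ('anthropic', 'claude-sonnet-4-5')
```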
Provider reference
| Provider | Extra | Env var |
|---|---|---|
| Anthropic | (included) | ANTHROPIC_API_KEY |
| OpenAI | [openai] | OPENAI_API_KEY |
| OpenRouter | [openai] | OPENROUTER_API_KEY |
| Together AI | [openai] | TOGETHER_API_KEY |
| Groq | [openai] | GROQ_API_KEY |
| Ollama | [openai] | — |
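For example (the extras syntax mirrors the table above; treat this as an illustrative sketch rather than exact install instructions, and substitute your own keys):

```shell
# Anthropic support ships with the base package
pip install aevyra-origin
export ANTHROPIC_API_KEY="<your key>"

# OpenAI-compatible providers (OpenAI, OpenRouter, Together AI, Groq, Ollama)
# all share the [openai] extra; Ollama needs no key
pip install "aevyra-origin[openai]"
export OPENAI_API_KEY="<your key>"
```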
CLI provider format
The CLI uses `provider/model` strings via `--model`: a bare `claude-sonnet-4-5` infers `anthropic`; anything else infers `openai`.
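The CLI's inference rule can be sketched as follows (the helper name is hypothetical, and keying on the `claude` prefix is an assumption generalized from the single example above):

```python
def infer_provider(model: str) -> str:
    """Infer a provider for a --model value.

    An explicit 'provider/' prefix always wins; otherwise bare
    claude-* names map to anthropic (assumption), and anything
    else maps to openai.
    """
    if "/" in model:
        return model.split("/", 1)[0]  # explicit provider wins
    if model.startswith("claude"):     # assumed prefix heuristic
        return "anthropic"
    return "openai"

print(infer_provider("claude-sonnet-4-5"))  # → anthropic
print(infer_provider("gpt-4o"))             # → openai
```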
Choosing a model
For most traces, a mid-tier model like `claude-sonnet-4-5` or `gpt-4o` is sufficient. Use a stronger model (`claude-opus-4-6`) when the trace is long or the rubric has many competing criteria. For quick debugging runs, a fast local model (Ollama `qwen3:8b`) works well.
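A few illustrative invocations (the subcommand and trace argument here are hypothetical placeholders; only `--model` and the `provider/model` format come from this page):

```shell
# Mid-tier hosted model for a typical run (hypothetical subcommand/args)
origin attribute trace.json --model openai/gpt-4o

# Fast local model for quick debugging (assumes Ollama is running)
origin attribute trace.json --model ollama/qwen3:8b
```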
Attribution quality depends more on clear rubric writing than on model choice —
a specific, criterion-by-criterion rubric outperforms a vague one regardless
of model.