Documentation Index
Fetch the complete documentation index at: https://docs.aevyra.ai/llms.txt
Use this file to discover all available pages before exploring further.
Built-in providers
| Provider | Name | Env var |
|---|---|---|
| OpenAI | openai | OPENAI_API_KEY |
| Anthropic | anthropic | ANTHROPIC_API_KEY |
| Google Gemini | google | GOOGLE_API_KEY |
| Mistral | mistral | MISTRAL_API_KEY |
| Cohere | cohere | COHERE_API_KEY |
| OpenRouter | openrouter | OPENROUTER_API_KEY |
Check which keys are configured:
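Since the CLI reads keys from the environment, one quick check is to see which of the variables in the table above are set. This helper is illustrative only, not part of aevyra-verdict:

```python
import os

# Env vars used by the built-in providers (from the table above).
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "cohere": "COHERE_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

def configured_providers(env=os.environ):
    """Return provider names whose API key is set and non-empty."""
    return sorted(name for name, var in PROVIDER_ENV_VARS.items() if env.get(var))
```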
Inline flags
Pass --model (or -m) once per model in provider/model format:
```shell
aevyra-verdict run data.jsonl \
  -m openai/gpt-5.4-nano \
  -m qwen/qwen3.5-9b \
  -m google/gemini-2.0-flash
```
Config file
For more than a couple of models, use a config file; .yaml, .json, and .toml formats are supported.
```shell
aevyra-verdict run data.jsonl --config models.yaml
```
```yaml
# models.yaml
models:
  - provider: openai
    model: gpt-5.4-nano
    label: gpt-5.4-nano
  - provider: openrouter
    model: qwen/qwen3.5-9b
    label: qwen3.5-9b
  - provider: openrouter
    model: meta-llama/llama-3.1-8b-instruct
    label: llama-openrouter
```
The label field sets the display name in results. It is optional and defaults to provider/model.
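The label fallback is easy to see with a small sketch. This is illustrative, not the CLI's actual config loader; a JSON config is used here since the CLI also accepts .json:

```python
import json

# JSON equivalent of a two-model config; the first entry omits label.
config_text = """
{
  "models": [
    {"provider": "openai", "model": "gpt-5.4-nano"},
    {"provider": "openrouter", "model": "qwen/qwen3.5-9b", "label": "qwen3.5-9b"}
  ]
}
"""

def display_labels(config):
    # label is optional; it falls back to provider/model
    return [m.get("label") or f"{m['provider']}/{m['model']}" for m in config["models"]]

labels = display_labels(json.loads(config_text))
```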
OpenRouter
OpenRouter gives you access to 200+ models across every major
provider with a single API key. Model names follow the provider/model format listed
on openrouter.ai/models.
```shell
aevyra-verdict run data.jsonl -m openrouter/mistralai/mistral-large
```
```python
from aevyra_verdict.providers import get_provider

provider = get_provider(
    "openrouter",
    "meta-llama/llama-3.1-8b-instruct",
    site_url="https://yoursite.com",  # optional, for OpenRouter analytics
    app_name="aevyra-verdict",
)
```
Local models (vLLM / Ollama)
Any OpenAI-compatible local server works via the openai provider with a custom base_url.
vLLM
Start the server:
```shell
vllm serve meta-llama/Llama-3.1-8B-Instruct
```
Config entry:
```yaml
- provider: openai
  model: meta-llama/Llama-3.1-8B-Instruct
  base_url: http://localhost:8000/v1
  api_key: "none"
  label: llama-local
```
Ollama
Start the server:
```shell
ollama serve
```
Config entry:
```yaml
- provider: openai
  model: llama3.1
  base_url: http://localhost:11434/v1
  api_key: "ollama"
  label: llama-ollama
```
Custom providers
Subclass Provider and register it:
```python
from aevyra_verdict.providers import Provider, register_provider, CompletionResult

class MyProvider(Provider):
    name = "my_provider"

    def complete(self, messages, temperature=0.0, max_tokens=1024, **kwargs):
        # call your API here
        return CompletionResult(
            text="response text",
            model=self.model,
            provider=self.name,
            latency_ms=100.0,
        )

register_provider("my_provider", MyProvider)
```
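Registration amounts to a plain name-to-class lookup. A minimal self-contained sketch of that pattern (the toy registry and the hypothetical EchoProvider below are illustrative, not the library's internals):

```python
# Toy registry mirroring the register_provider / get_provider pattern.
_REGISTRY = {}

def register_provider(name, cls):
    _REGISTRY[name] = cls

def get_provider(name, model, **kwargs):
    # Look up the class by name and instantiate it for the given model.
    return _REGISTRY[name](model, **kwargs)

class EchoProvider:
    """Hypothetical provider that echoes the last user message."""
    name = "echo"

    def __init__(self, model):
        self.model = model

    def complete(self, messages):
        return messages[-1]["content"]

register_provider("echo", EchoProvider)
provider = get_provider("echo", "echo-1")
```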
Python API
```python
from aevyra_verdict.providers import get_provider

provider = get_provider("openai", "gpt-5.4-nano")
result = provider.complete([{"role": "user", "content": "Hello"}])

print(result.text)
print(result.latency_ms)
print(result.usage)  # {"prompt_tokens": 10, "completion_tokens": 5}
```