Adapters convert existing telemetry formats into an AgentTrace.
Use them when your agent already emits logs and you don’t want to add
@span decorators, or when you’re working with a framework you don’t
control (LangGraph, CrewAI, AutoGen, OpenClaw, etc.).
OpenClaw JSONL
OpenClaw streams telemetry as newline-delimited JSON. The adapter handles LLM turns, MCP tool calls, agent lifecycle events, and start/end event pairing, and recognises the following event types directly from the stream:
| Family | Recognised event types |
|---|---|
| LLM turns | llm.request, llm.response, llm_turn, reason, plan, respond |
| Tool calls | tool.call, tool_call, mcp.call, any type containing tool |
| Agent / Task Brain | agent.start, agent.finish, task.*, cron.*, acp.* |
Many events arrive as paired *.start / *.end events sharing a span_id. The adapter merges them automatically into single completed spans. Unmatched starts are surfaced with an error field so Origin can reason about truncated runs.
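As an illustrative sketch (not the adapter's actual code), the pairing behaviour looks roughly like this, assuming each JSONL event carries a type and a span_id field; both field names are assumptions for illustration:

```python
# Rough sketch of start/end pairing: *.start and *.end events sharing a span_id
# are merged into one completed span; unmatched starts are kept with an error field.
import json

def pair_events(jsonl_text: str) -> list[dict]:
    open_spans: dict[str, dict] = {}
    completed: list[dict] = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        etype = event.get("type", "")       # assumed field name
        span_id = event.get("span_id")      # assumed field name
        if etype.endswith(".start"):
            open_spans[span_id] = event
        elif etype.endswith(".end") and span_id in open_spans:
            start = open_spans.pop(span_id)
            completed.append({**start, **event})  # merge start + end into one span
        else:
            completed.append(event)  # single-shot events pass through unchanged
    # Surface truncated runs: starts that never saw a matching end
    for start in open_spans.values():
        completed.append({**start, "error": "unmatched start (run truncated)"})
    return completed
```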
OpenTelemetry
The OTel adapter converts any OpenTelemetry spans into an AgentTrace, whether they come from the Python SDK, OTLP JSON export, or any framework using the GenAI semantic conventions.
Supported frameworks (any that emit OTel GenAI spans):
- LangGraph / LangChain (via opentelemetry-instrumentation-langchain)
- CrewAI (built-in OTel export)
- AutoGen / Microsoft Agent Framework
- Vercel AI SDK (TypeScript, via OTLP HTTP export)
- OpenClaw (diagnostics.otel.logs path)
- Any custom framework emitting gen_ai.* attributes
From the Python SDK
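The example below is a minimal sketch of collecting finished spans with the OpenTelemetry Python SDK's in-memory exporter and handing them to the adapter. The exporter and GenAI attributes are standard OTel; the final conversion call (from_otel_spans) is a hypothetical name used only for illustration, not a confirmed API.

```python
# Capture finished OTel spans in memory, then convert them.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-agent")
with tracer.start_as_current_span("chat gpt-4o") as span:
    # GenAI semantic-convention attributes the adapter reads (see table below)
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.usage.input_tokens", 420)
    span.set_attribute("gen_ai.usage.output_tokens", 83)

finished_spans = exporter.get_finished_spans()
# agent_trace = from_otel_spans(finished_spans)  # hypothetical adapter entry point
```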
From OTLP JSON export
If you’re collecting spans from a non-Python service (TypeScript, Go, etc.) via an OTLP collector that writes JSON:
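The snippet below is a rough sketch of loading such an export with the standard library. The OTLP/JSON layout (resourceSpans → scopeSpans → spans) is standard, but the final conversion call (from_otlp_json) is a hypothetical name, not a confirmed API.

```python
# Flatten an OTLP/JSON export (e.g. from a collector's file exporter) into a
# plain list of span dicts before handing it to the adapter.
import json

with open("otlp_export.json", "r", encoding="utf-8") as f:
    payload = json.load(f)

spans = []
for resource_spans in payload.get("resourceSpans", []):
    for scope_spans in resource_spans.get("scopeSpans", []):
        spans.extend(scope_spans.get("spans", []))

print(f"loaded {len(spans)} spans")
# agent_trace = from_otlp_json(payload)  # hypothetical adapter entry point
```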
Attribute mapping

The adapter reads the OTel GenAI semantic conventions:

| OTel attribute | TraceNode field |
|---|---|
| gen_ai.operation.name | kind classification |
| gen_ai.request.model | metadata["model"] |
| gen_ai.usage.input_tokens | tokens (partial) |
| gen_ai.usage.output_tokens | tokens (partial) |
| gen_ai.content.prompt event | input |
| gen_ai.content.completion event | output |
| gen_ai.tool.name | name (KIND_TOOL) |
| mcp.server.name | metadata["mcp_server"] |
| error.type / exception.message | error |
Kind classification:

- gen_ai.operation.name in {chat, text_completion, completions} → KIND_REASON
- gen_ai.tool.name present, or name starts with mcp. → KIND_TOOL
- gen_ai.system present without a tool → KIND_REASON
- Everything else → KIND_OTHER
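As a sketch, the rules above translate to roughly the following; the constant values and the attribute-dict shape are assumptions for illustration, not the library's actual code:

```python
# Illustrative kind classification following the rules listed above.
KIND_REASON, KIND_TOOL, KIND_OTHER = "reason", "tool", "other"

def classify(span_name: str, attrs: dict) -> str:
    if attrs.get("gen_ai.operation.name") in {"chat", "text_completion", "completions"}:
        return KIND_REASON
    if "gen_ai.tool.name" in attrs or span_name.startswith("mcp."):
        return KIND_TOOL
    if "gen_ai.system" in attrs:
        return KIND_REASON
    return KIND_OTHER
```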