

Adapters convert existing telemetry formats into an AgentTrace. Use them when your agent already emits logs and you don’t want to add @span decorators, or when you’re working with a framework you don’t control (LangGraph, CrewAI, AutoGen, OpenClaw, etc.).

OpenClaw JSONL

OpenClaw streams telemetry as newline-delimited JSON. The adapter handles LLM turns, MCP tool calls, agent lifecycle events, and start/end event pairing:
from pathlib import Path
from aevyra_witness.adapters import from_openclaw_jsonl

lines = Path("run.jsonl").read_text().splitlines()
trace = from_openclaw_jsonl(lines)
Pass pre-parsed dicts instead of strings if you already have them:
import json
events = [json.loads(line) for line in lines if line.strip()]
trace = from_openclaw_jsonl(events)
Mark prompts for Reflex — tell the adapter which prompt ids to mark as optimization targets, or let it pick up "optimize": true events directly from the stream:
trace = from_openclaw_jsonl(
    lines,
    optimize_prompt_ids=["planner_v1", "responder_v2"],
    ideal="The correct answer is ...",
    trace_metadata={"session_id": "sess_abc123"},
)
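If you are constructing or inspecting such a stream by hand, a minimal sketch of what the adapter consumes might look like the following. The field names (type, span_id, prompt_id, and placing the "optimize": true flag on a request event) are illustrative assumptions about the wire format, not a documented schema:

```python
import json

# Hypothetical OpenClaw-style JSONL events; field names are assumptions
# for illustration, not a documented schema.
raw = "\n".join([
    json.dumps({"type": "llm.request", "span_id": "s1",
                "prompt_id": "planner_v1", "optimize": True,
                "input": "Plan the task"}),
    json.dumps({"type": "llm.response", "span_id": "s1",
                "output": "Step 1: ..."}),
    json.dumps({"type": "tool.call", "span_id": "s2",
                "name": "search", "input": {"query": "weather"}}),
])

lines = raw.splitlines()
events = [json.loads(line) for line in lines if line.strip()]
# An event carrying "optimize": true would be picked up as a Reflex
# target without passing optimize_prompt_ids explicitly.
```

Either the raw lines or the parsed `events` list can then be handed to `from_openclaw_jsonl` as shown above.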
Event types recognised — the adapter handles multiple OpenClaw versions and their evolving event names:
  • LLM turns: llm.request, llm.response, llm_turn, reason, plan, respond
  • Tool calls: tool.call, tool_call, mcp.call, any type containing tool
  • Agent / Task Brain: agent.start, agent.finish, task.*, cron.*, acp.*
Unknown event types are skipped with a debug log; a single malformed event never aborts the import.

Start/end pairing — OpenClaw can emit paired *.start / *.end events sharing a span_id. The adapter merges them automatically into single completed spans. Unmatched starts are surfaced with an error field so Origin can reason about truncated runs.
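The pairing behaviour can be sketched as a small standalone function, assuming events carry a type ending in .start / .end and a shared span_id. This is an illustration of the idea only, not the adapter's actual code:

```python
# Illustrative sketch of start/end pairing; not the adapter's source.
def pair_events(events):
    open_spans, completed = {}, []
    for ev in events:
        if ev["type"].endswith(".start"):
            open_spans[ev["span_id"]] = ev
        elif ev["type"].endswith(".end"):
            start = open_spans.pop(ev["span_id"], None)
            if start is not None:
                # Merge the pair into one completed span; strip ".end".
                completed.append({**start, **ev, "type": ev["type"][:-4]})
    # Unmatched starts survive with an error field, so truncated runs
    # stay visible downstream.
    for start in open_spans.values():
        completed.append({**start, "error": "unmatched start"})
    return completed
```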

OpenTelemetry

The OTel adapter converts any OpenTelemetry spans — from the Python SDK, OTLP JSON export, or any framework using the GenAI semantic conventions — into an AgentTrace. Supported frameworks (any that emit OTel GenAI spans):
  • LangGraph / LangChain (via opentelemetry-instrumentation-langchain)
  • CrewAI (built-in OTel export)
  • AutoGen / Microsoft Agent Framework
  • Vercel AI SDK (TypeScript, via OTLP HTTP export)
  • OpenClaw (diagnostics.otel.logs path)
  • Any custom framework emitting gen_ai.* attributes

From the Python SDK

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from aevyra_witness.adapters import from_otel_spans

exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))

# ... run your LangGraph / CrewAI / AutoGen agent ...

spans = exporter.get_finished_spans()
trace = from_otel_spans(spans)

From OTLP JSON export

If you’re collecting spans from a non-Python service (TypeScript, Go, etc.) via an OTLP collector that writes JSON:
import json
from pathlib import Path
from aevyra_witness.adapters import from_otel_spans

spans = json.loads(Path("spans.json").read_text())
trace = from_otel_spans(spans)
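For orientation, OTLP/JSON nests spans under resourceSpans → scopeSpans → spans, with attributes encoded as key/value records. A minimal payload of that shape, plus the flattening you would do to reach the raw span dicts, looks roughly like this (the adapter may handle the nesting itself, so treat this as a sketch of the format rather than required pre-processing):

```python
import json

# Minimal OTLP/JSON payload (standard OTLP nesting; values illustrative).
otlp = json.loads("""{
  "resourceSpans": [{
    "scopeSpans": [{
      "spans": [{
        "name": "chat gpt-4o",
        "attributes": [
          {"key": "gen_ai.operation.name", "value": {"stringValue": "chat"}},
          {"key": "gen_ai.request.model",  "value": {"stringValue": "gpt-4o"}}
        ]
      }]
    }]
  }]
}""")

# Flatten the OTLP envelope to a plain span list,
# then index one span's attributes by key.
spans = [span
         for rs in otlp["resourceSpans"]
         for ss in rs["scopeSpans"]
         for span in ss["spans"]]
attrs = {a["key"]: a["value"]["stringValue"] for a in spans[0]["attributes"]}
```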

Attribute mapping

The adapter maps OTel GenAI semantic-convention attributes to TraceNode fields:
  • gen_ai.operation.name → kind classification
  • gen_ai.request.model → metadata["model"]
  • gen_ai.usage.input_tokens → tokens (partial)
  • gen_ai.usage.output_tokens → tokens (partial)
  • gen_ai.content.prompt event → input
  • gen_ai.content.completion event → output
  • gen_ai.tool.name → name (KIND_TOOL)
  • mcp.server.name → metadata["mcp_server"]
  • error.type / exception.message → error
Span classification:
  • gen_ai.operation.name in {chat, text_completion, completions} → KIND_REASON
  • gen_ai.tool.name present, or name starts with mcp. → KIND_TOOL
  • gen_ai.system present without a tool → KIND_REASON
  • Everything else → KIND_OTHER
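The classification rules above can be restated as a tiny standalone function. This is a re-derivation for illustration, with the kinds spelled as plain strings; it is not the adapter's internal implementation:

```python
# Illustrative restatement of the documented classification rules,
# checked in the order the docs list them.
def classify(name: str, attrs: dict) -> str:
    op = attrs.get("gen_ai.operation.name")
    if op in {"chat", "text_completion", "completions"}:
        return "KIND_REASON"
    if "gen_ai.tool.name" in attrs or name.startswith("mcp."):
        return "KIND_TOOL"
    if "gen_ai.system" in attrs:
        # gen_ai.system present without a tool -> reasoning span.
        return "KIND_REASON"
    return "KIND_OTHER"
```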

Optional parameters

trace = from_otel_spans(
    spans,
    optimize_prompt_ids=["planner_v1"],   # mark these prompts for Reflex
    ideal="Expected answer ...",
    trace_metadata={"run_id": "run_001"},
)