Add @span decorators to the functions you want to trace. Each
decorated function becomes one span in the captured trace:
```python
from aevyra_witness import AgentTrace
from aevyra_witness.runtime import span, trace

@span("classify")
def classify(ticket: str) -> str:
    return "billing/refund"

@span("search_kb", optimize=True, prompt_id="kb_search_v2")
def search_kb(topic: str) -> list[str]:
    return ["refund_policy.md", "billing_faq.md"]

@span("answer", optimize=True, prompt_id="answer_v1")
def answer(question: str, docs: list[str]) -> str:
    return "You can request a refund within 30 days by contacting support."

def my_agent(question: str) -> str:
    topic = classify(question)
    docs = search_kb(topic)
    return answer(question, docs)

# Run under a tracer to capture the execution
with trace() as tracer:
    output = my_agent("I was charged twice — how do I get a refund?")

captured: AgentTrace = tracer.finish()
print(captured.to_trace_text())
```
The tracer automatically captures each span’s input, output, and timing.
No changes to your function signatures are required.
```python
import json
from pathlib import Path

from aevyra_witness import AgentTrace

# Human-readable rendering
print(captured.to_trace_text())

# List each span's id, kind, and name
for node in captured.nodes:
    print(f"{node.id} {node.kind:<10} {node.name}")

# Serialise — pass to Origin, save to disk, ship over HTTP
Path("trace.json").write_text(captured.to_json(indent=2))

# Deserialise
trace2 = AgentTrace.from_dict(json.loads(Path("trace.json").read_text()))
```
Set optimize=True and a prompt_id on any span whose prompt you want
Reflex to improve. Spans sharing the same prompt_id are treated as
instances of the same prompt across steps:
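As a toy illustration of that grouping (not the library's API), the hypothetical span records below are bucketed by their `prompt_id`, the way an optimiser would collect all executions of one prompt:

```python
from collections import defaultdict

# Hypothetical captured span records; only "prompt_id" matters here.
spans = [
    {"name": "search_kb", "prompt_id": "kb_search_v2"},
    {"name": "answer", "prompt_id": "answer_v1"},
    {"name": "answer_retry", "prompt_id": "answer_v1"},
]

# Spans sharing a prompt_id land in the same bucket.
by_prompt = defaultdict(list)
for s in spans:
    by_prompt[s["prompt_id"]].append(s["name"])

print(dict(by_prompt))
# {'kb_search_v2': ['search_kb'], 'answer_v1': ['answer', 'answer_retry']}
```

Both `answer` spans end up in the `answer_v1` bucket, so improvements to that prompt apply to every step that uses it.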