Conditional Execution
The when parameter controls whether a step runs based on runtime conditions:
workflow = (
    Workflow(name="router")
    .step(classify, text="Urgent: server down")
    .step(handle_urgent,
          text="Server down",
          when=lambda: get_run_context().step_span("classify").output == "urgent")
    .step(handle_normal,
          text="Server down",
          when=lambda: get_run_context().step_span("classify").output == "normal")
)
Both handle_urgent and handle_normal wait for classify to complete. Only the one whose condition is met will execute.
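The gating logic can be pictured with a plain-Python sketch (hypothetical helper functions, no Timbal imports; this illustrates the rule, not the Timbal scheduler): after the upstream step runs, each conditional step's predicate is evaluated against its output, and only the matching branch executes.

```python
# Plain-Python illustration of conditional gating (not the Timbal API).

def classify(text):
    return "urgent" if "down" in text.lower() else "normal"

def handle_urgent(text):
    return f"paging on-call: {text}"

def handle_normal(text):
    return f"queued ticket: {text}"

outputs = {}
outputs["classify"] = classify("Urgent: server down")

# Each branch pairs a handler with a predicate over the upstream output.
branches = [
    ("handle_urgent", handle_urgent, lambda: outputs["classify"] == "urgent"),
    ("handle_normal", handle_normal, lambda: outputs["classify"] == "normal"),
]
for name, fn, when in branches:
    if when():
        outputs[name] = fn("Server down")  # condition met: step runs
    else:
        outputs[name] = None               # condition not met: step skipped
```

Here only the urgent branch runs; the normal branch resolves as skipped.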
Skipped Steps
If a step’s condition is not met, it is skipped along with all its dependents:
workflow = (
    Workflow(name="pipeline")
    .step(validate_input, data="...")
    .step(process,
          when=lambda: get_run_context().step_span("validate_input").output == "valid")
    .step(save_results,
          data=lambda: get_run_context().step_span("process").output)
)
If validate_input returns "invalid", both process and save_results are skipped — save_results depends on process, which never runs.
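The propagation rule can be sketched in plain Python (an illustration of the rule above with hypothetical names, not Timbal's implementation): starting from the step whose condition failed, every step that reads data from a skipped step is skipped transitively.

```python
# Hypothetical data-dependency edges: step -> steps it reads data from.
deps = {
    "validate_input": [],
    "process": ["validate_input"],
    "save_results": ["process"],
}

def skipped_steps(failed):
    """Return every step skipped when `failed`'s condition is not met."""
    skipped = {failed}
    changed = True
    while changed:
        changed = False
        for step, upstream in deps.items():
            # A step is skipped if any step it reads data from is skipped.
            if step not in skipped and any(u in skipped for u in upstream):
                skipped.add(step)
                changed = True
    return skipped
```

With the pipeline above, `skipped_steps("process")` yields both `process` and `save_results`, matching the behavior described.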
There is one exception: a step that uses depends_on only waits for the referenced steps to resolve (either complete or be skipped); it does not need their data. This is useful when you don't know in advance which branch will run:
workflow = (
    Workflow(name="pipeline")
    .step(classify, text="new customer signup")
    .step(handle_new,
          data="signup data",
          when=lambda: get_run_context().step_span("classify").output == "new")
    .step(handle_existing,
          data="update data",
          when=lambda: get_run_context().step_span("classify").output == "existing")
    .step(finalize, depends_on=["handle_new", "handle_existing"])
)
handle_new and handle_existing have mutually exclusive conditions, so only one runs. finalize references both via depends_on, so it waits for both to resolve (one completes, the other is skipped) and then executes regardless.
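The difference between the two dependency kinds can be sketched in plain Python (hypothetical names and statuses; this is not the Timbal scheduler): a data dependency requires the upstream step to complete, while a depends_on edge only requires it to resolve either way.

```python
COMPLETED, SKIPPED = "completed", "skipped"

def can_run(step_status, data_deps, wait_deps):
    # Data dependencies must have completed: a skipped producer has no output.
    if any(step_status.get(d) != COMPLETED for d in data_deps):
        return False
    # depends_on dependencies only need to be resolved (completed OR skipped).
    if any(step_status.get(d) not in (COMPLETED, SKIPPED) for d in wait_deps):
        return False
    return True

status = {"handle_new": COMPLETED, "handle_existing": SKIPPED}

# finalize uses depends_on only, so the skipped branch does not block it.
can_run(status, data_deps=[], wait_deps=["handle_new", "handle_existing"])
```

Had finalize instead read data from handle_existing, the skipped branch would have blocked it, which is exactly why depends_on is the right tool for fan-in after exclusive branches.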
Conditional with Agents
Use an LLM to make routing decisions:
classifier = Agent(
    name="classifier",
    model="openai/gpt-4.1-mini",
    system_prompt="Classify the message as 'technical' or 'billing'. Respond with one word only."
)

technical_agent = Agent(
    name="technical_agent",
    model="openai/gpt-4.1",
    system_prompt="You are a technical support specialist."
)

billing_agent = Agent(
    name="billing_agent",
    model="openai/gpt-4.1-mini",
    system_prompt="You are a billing support specialist."
)

workflow = (
    Workflow(name="support_router")
    .step(classifier)
    .step(technical_agent,
          when=lambda: "technical" in get_run_context().step_span("classifier").output.collect_text().lower())
    .step(billing_agent,
          when=lambda: "billing" in get_run_context().step_span("classifier").output.collect_text().lower())
)
result = await workflow(prompt="I can't access my account").collect()
The classifier agent decides which handler runs. Only the matching branch executes.
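The string matching inside the two when lambdas can be isolated as a small plain-Python helper (a hypothetical route function; collect_text() and the agents themselves are assumed from the example above). Lowercasing plus substring matching keeps the router tolerant of minor LLM output variation such as capitalization or trailing punctuation.

```python
# Hypothetical routing predicate mirroring the `when` lambdas above.
def route(classifier_text):
    label = classifier_text.strip().lower()
    if "technical" in label:
        return "technical_agent"
    if "billing" in label:
        return "billing_agent"
    return None  # no condition matches: both branches are skipped

route("Technical")  # tolerant of capitalization
route("billing.")   # tolerant of trailing punctuation
```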