
What are Workflows?

Workflows are programmable execution pipelines that orchestrate step-by-step processing with explicit control flow.
from timbal import Workflow

workflow = Workflow(name="my_workflow")

Building Blocks of a Workflow: Steps

Steps are the core units of work: they process data, perform actions, and pass results onward.

Adding Steps to the Workflow

Use .step() to add steps to a workflow. Any Runnable can be used as a step.
from timbal import Workflow
from timbal.state import get_run_context

def celsius_to_fahrenheit(celsius: float) -> float:
    return (celsius * 9/5) + 32

def format_result(temperature: float) -> str:
    return f"Temperature: {temperature}°F"

workflow = (
    Workflow(name="temperature_converter")
    .step(celsius_to_fahrenheit, celsius=35)
    .step(
        format_result,
        temperature=lambda: get_run_context().step_span("celsius_to_fahrenheit").output
    )
)
Functions used as workflow steps must accept and return Pydantic-serializable types (e.g., str, int, float, bool, dict, list, BaseModel). Custom classes that aren’t Pydantic models cannot be passed between steps.
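For example, a Pydantic model lets you pass structured data between steps while staying serializable. A minimal sketch (the Reading model and to_fahrenheit handler are illustrative, not part of timbal):

```python
from pydantic import BaseModel

class Reading(BaseModel):
    celsius: float
    sensor_id: str

def to_fahrenheit(reading: Reading) -> float:
    # A BaseModel is Pydantic-serializable, so it can cross step boundaries;
    # an arbitrary custom class in its place could not.
    return (reading.celsius * 9 / 5) + 32

print(to_fahrenheit(Reading(celsius=35, sensor_id="s1")))  # 95.0
```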

Step Names and Reusing Functions

Like all Runnables, steps are identified by their names, and each step's name must be unique within the workflow. To use the same function more than once, wrap it in a Tool with a distinct name for each usage:
from timbal import Tool, Workflow

def check_threshold(value: float, limit: float) -> bool:
    return value <= limit

# Same handler, two distinctly named steps
threshold_high = Tool(name="threshold_high", handler=check_threshold)
threshold_low = Tool(name="threshold_low", handler=check_threshold)

workflow = (
    Workflow(name="monitoring")
    .step(threshold_high, value=80, limit=100)
    .step(threshold_low, value=80, limit=50)
)

Step Context

The input and output from each step is stored in its Run Context, accessible via get_run_context().current_span(). You can also store custom variables and access data from sibling steps. See Control Flow for details and Context & State Management for the full reference.
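Conceptually, the run context is a per-run store of spans keyed by step name, where each span records a step's input and output. A toy sketch of that idea (not timbal's actual implementation):

```python
# Toy model of a run context: spans keyed by step name.
class Span:
    def __init__(self, input, output):
        self.input = input
        self.output = output

class RunContext:
    def __init__(self):
        self._spans = {}

    def record(self, name, input, output):
        # Store what a step received and produced
        self._spans[name] = Span(input, output)

    def step_span(self, name):
        # Any sibling step can look up an earlier step by name
        return self._spans[name]

ctx = RunContext()
ctx.record("celsius_to_fahrenheit", input={"celsius": 35}, output=95.0)
print(ctx.step_span("celsius_to_fahrenheit").output)  # 95.0
```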

Running the Workflow

The workflow’s final output is determined by the last executed step in the dependency graph. To run the workflow and get the final output directly, use the collect() method:
result = await workflow().collect()
print(result.output)
Or stream all events as they happen:
async for event in workflow():
    print(event)

Workflow Output

When you run a workflow, only the last executed step’s output is returned. If you need outputs from multiple steps, create a final step that receives all of them and combines them:
workflow = (
    Workflow(name="pipeline")
    .step(step1)
    .step(step2)
    .step(step3,
        users=lambda: get_run_context().step_span("step1").output,
        orders=lambda: get_run_context().step_span("step2").output)
)
# Only step3's output is returned, which includes both users and orders
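The combining step is just an ordinary function that merges its inputs into one return value; something like this hypothetical combine handler would do:

```python
def combine(users: list, orders: list) -> dict:
    # Final step: bundle sibling outputs into a single return value,
    # so the workflow's one output carries both.
    return {"users": users, "orders": orders}

result = combine(["ana", "bo"], [{"id": 1}])
print(result)  # {'users': ['ana', 'bo'], 'orders': [{'id': 1}]}
```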

Composition

Workflows can contain other workflows as steps:
# Inner workflow
data_pipeline = (
    Workflow(name="data_pipeline")
    .step(fetch_data, source="api")
    .step(clean_data,
        raw=lambda: get_run_context().step_span("fetch_data").output)
)

# Outer workflow uses the inner workflow as a step
report_pipeline = (
    Workflow(name="report_pipeline")
    .step(data_pipeline)
    .step(generate_report,
        data=lambda: get_run_context().step_span("data_pipeline").output)
    .step(send_email,
        report=lambda: get_run_context().step_span("generate_report").output)
)
The inner workflow runs as a single step. Its final output becomes the step output accessible by subsequent steps.

Integrating LLMs

Using Agents

Best for autonomous AI that needs to reason, use tools, and maintain conversation context:
from timbal import Agent, Workflow

agent = Agent(
    name="agent",
    model="openai/gpt-5.2"
)

workflow = (
    Workflow(name="text_processor")
    .step(agent, prompt="what is the weather in london")
)

Key Features

  • Composition: Workflows can contain other workflows
  • Parallel Execution: Independent steps run concurrently
  • Automatic Dependencies: Lambda parameters create implicit execution order
  • Type Safety: Automatic parameter validation via Pydantic
  • Error Isolation: Failed steps skip dependents, others continue
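The parallel-execution behavior resembles asyncio task scheduling: steps with no dependency between them are launched concurrently, and a dependent step awaits its parents. A rough sketch of the idea (not timbal's scheduler):

```python
import asyncio

async def fetch_users() -> list:
    await asyncio.sleep(0.01)  # independent step
    return ["ana", "bo"]

async def fetch_orders() -> list:
    await asyncio.sleep(0.01)  # independent step, runs concurrently
    return [{"id": 1}]

async def report(users: list, orders: list) -> str:
    # Dependent step: only runs once both parents have finished
    return f"{len(users)} users, {len(orders)} orders"

async def main() -> str:
    # Independent steps run concurrently; the dependent step awaits both.
    users, orders = await asyncio.gather(fetch_users(), fetch_orders())
    return await report(users, orders)

print(asyncio.run(main()))  # 2 users, 1 orders
```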