Introduction to Timbal

Build, orchestrate, and deploy AI solutions with ease.


What is Timbal?

Timbal is an open-source framework for designing autonomous AI applications with flexible orchestration, real-time streaming, and multi-provider support.

Timbal offers both high-level simplicity and precise low-level control, making it well suited to autonomous AI applications in any scenario.

Timbal provides two main patterns for building AI applications:

  • Agentic (Agent-based): Agents are autonomous execution units that orchestrate LLM interactions with tool calling. Define tools as plain Python functions; the framework handles schema generation, parameter validation, and execution orchestration. Best for:

    • Complex reasoning tasks
    • Dynamic tool selection
    • Open-ended problem solving
    from timbal import Agent

    agent = Agent(
        name="my_agent",
        model="openai/gpt-4.1",
        tools=[search_web],
    )
  • Workflow (Flow-based): Fine-grained control over your AI pipeline with explicit step-by-step execution. Ideal for:

    • Predictable processes
    • Strict control requirements
    • Performance-critical applications
    from timbal import Workflow

    # Document database creation flow
    workflow = (
        Workflow(name="my_workflow")
        .step(extract_text)
        .step(text_processor, raw_text=extract_text.output)
        .step(convert_to_database, text=text_processor.output, chunk_size=1000)
    )

    You can compose multiple Workflows to build complex applications.

Choose the pattern that best aligns with your application's requirements for control, flexibility, and predictability.
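To make the tool pattern concrete, a tool can be an ordinary Python function with type hints and a docstring. The `search_web` name and its stubbed body below are hypothetical, not part of Timbal:

```python
# Hypothetical tool: a plain Python function with type hints and a docstring.
# A framework like Timbal can derive the tool's schema (name, parameters,
# description) from this signature; the stubbed body stands in for a real API call.
def search_web(query: str, max_results: int = 3) -> list[str]:
    """Search the web and return up to max_results result snippets."""
    # Stubbed results; a real implementation would call a search API here.
    return [f"Result {i + 1} for '{query}'" for i in range(max_results)]
```

A function like this could then be passed to an agent via `tools=[search_web]`, as in the agent example above.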

Key Features

Flow-based Orchestration

Build complex applications by composing LLM calls and tools into reusable flows.

Flexible Tools

Equip agents with custom tools and APIs to interact with external services.

Streaming Support

Real-time streaming responses and updates from LLM interactions.
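The streaming pattern can be sketched with a plain asyncio async generator; this is an illustration of the consumption pattern, not Timbal's actual client API:

```python
import asyncio
from typing import AsyncIterator

# Illustrative only: a minimal async generator that yields response chunks
# one at a time, the same shape a streaming LLM client exposes.
async def stream_reply(text: str) -> AsyncIterator[str]:
    for token in text.split():
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield token + " "

async def main() -> str:
    chunks = []
    async for chunk in stream_reply("streamed one token at a time"):
        chunks.append(chunk)  # a real UI would render each chunk incrementally
    return "".join(chunks).strip()

print(asyncio.run(main()))
```

The consumer sees partial output as it arrives instead of waiting for the full response.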

Task Management

Define sequential or parallel workflows with a DAG (directed acyclic graph) architecture, automatically handling task dependencies and execution order.
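The dependency handling can be pictured with Python's stdlib `graphlib`, which DAG-based scheduling conceptually resembles (an illustration, not Timbal's internals):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
graph = {
    "extract_text": set(),
    "text_processor": {"extract_text"},
    "convert_to_database": {"text_processor"},
    "build_index": {"extract_text"},  # independent of text_processor: eligible to run in parallel
}

# static_order() emits every task after all of its dependencies.
order = list(TopologicalSorter(graph).static_order())
print(order)
```

Tasks with no dependency between them (here `text_processor` and `build_index`) can be dispatched concurrently.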

Memory Management

Built-in state persistence and memory handling across interactions for contextual awareness and continuity.
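A minimal sketch of what per-session state persistence looks like, using only the stdlib; the class and method names are illustrative, not Timbal's API:

```python
# Illustrative per-session conversation memory: each session keeps an ordered
# message history that later turns can read back for context.
class ConversationMemory:
    def __init__(self) -> None:
        self._sessions: dict[str, list[dict[str, str]]] = {}

    def append(self, session_id: str, role: str, content: str) -> None:
        self._sessions.setdefault(session_id, []).append(
            {"role": role, "content": content}
        )

    def history(self, session_id: str) -> list[dict[str, str]]:
        return self._sessions.get(session_id, [])

memory = ConversationMemory()
memory.append("user-1", "user", "What's Timbal?")
memory.append("user-1", "assistant", "An open-source AI framework.")
```

A production implementation would persist this state to durable storage rather than an in-process dict.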

Multi-Provider Support

Seamlessly integrate with leading LLM providers including OpenAI, Anthropic, Gemini, and TogetherAI.


Why are we building this?

  • Simplicity over complexity. Unlike LangGraph, CrewAI, and other frameworks that abstract away what's really happening, Timbal keeps things transparent. Under the hood, it's just LLM calls and async function execution - no hidden magic, no opaque abstractions.

  • Developer experience first. We believe you shouldn't need to learn a new mental model to build AI applications. If you understand functions and async/await from any modern programming language, you understand Timbal. Most agents are built in 10-20 lines of code.

  • Battle-tested architecture. Our core abstractions have been refined through real-world production usage. The framework is designed around proven patterns: async generators, Pydantic validation, and event-driven processing.

  • Robust interfaces. Strong typing and validation make it nearly impossible to break things from the outside, while clean abstractions make internal modifications straightforward. The framework fails fast with clear error messages.
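The fail-fast style can be sketched with a stdlib dataclass standing in for Pydantic (to keep the example dependency-free); the `Event` type and its allowed values are hypothetical:

```python
from dataclasses import dataclass

ALLOWED_TYPES = {"start", "chunk", "end"}

# Hypothetical event type: invalid values are rejected at construction time
# with a clear message, rather than surfacing later as silent misbehavior.
@dataclass(frozen=True)
class Event:
    type: str
    payload: str = ""

    def __post_init__(self) -> None:
        if self.type not in ALLOWED_TYPES:
            raise ValueError(
                f"invalid event type {self.type!r}; expected one of {sorted(ALLOWED_TYPES)}"
            )
```

Constructing `Event("bogus")` raises immediately with a message naming the bad value, which is the fail-fast behavior described above.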

  • Performance by design. Built for production workloads with concurrent execution, efficient memory management, and minimal overhead. Every design decision prioritizes speed and scalability.

  • Stability in a chaotic ecosystem. Providers change APIs monthly. Timbal provides a stable abstraction layer that shields your production applications from those changes.