What are Agents?
Agents are autonomous execution units that orchestrate LLM interactions with tool calling.
Without tools, an agent functions as a basic LLM. The simplest agent requires just a name and model:
from timbal import Agent
agent = Agent(
name="my_agent",
model="openai/gpt-5"
) # That's it! You've created your first agent!
Model Providers
You can specify any model using the “provider/model” format. See all supported models in Model Capabilities.
Some models require specific parameters (like max_tokens for Claude). Use model_params to pass any additional model configuration:
agent = Agent(
name="claude_agent",
model="anthropic/claude-sonnet-4-latest",
model_params={
"max_tokens": 1024
}
)
Note: Make sure to define all required environment variables, such as the API key for the model provider you use, in your .env file.
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_claude_api_key
Define tools as Python functions - the framework handles schema generation, parameter validation, and execution orchestration.
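For example, a tool can be a plain Python function wrapped in a Tool. A minimal sketch (the get_weather function and its city parameter are illustrative, not part of Timbal):
from timbal import Agent, Tool

def get_weather(city: str) -> str:
    """Return a short weather summary for a city."""
    # Illustrative stub - a real tool would call a weather API here
    return f"It's sunny in {city}."

agent = Agent(
    name="weather_agent",
    model="openai/gpt-4o-mini",
    tools=[Tool(handler=get_weather)]
)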
Running Agents
Execute agents by calling them with a prompt parameter and using .collect() to get the result:
response = await agent(
prompt="What is the capital of Germany?"
).collect()
Streaming Events
For real-time processing, you can stream events as they happen:
async for event in agent(prompt="Hello"):
print(event)
Agents communicate through Message objects - Timbal’s data structure that standardizes both input and output.
from timbal.types.message import Message
response = await agent(
prompt=Message.validate("What is the capital of Germany?")
).collect()
Agents accept multiple input formats, automatically converting them to Message objects:
from timbal.types.file import File

# String
response = await agent(
prompt="What's the weather?"
).collect()
# File - Timbal type
response = await agent(
prompt=File.validate("image.png")
).collect()
# List
response = await agent(
prompt=["Describe this image", File.validate("image.png")]
).collect()
You can pass custom input parameters when calling an agent. Access these values anywhere in your agent (tools, system prompts, hooks) using get_run_context().current_span().input:
from timbal import Agent, Tool
from timbal.state import get_run_context
def get_user_info():
span = get_run_context().current_span()
user_id = span.input.get("user_id")
role = span.input.get("role")
return f"User {user_id} with role {role}"
agent = Agent(
name="user_agent",
model="openai/gpt-4o-mini",
tools=[Tool(handler=get_user_info)]
)
# Pass custom inputs
response = await agent(
prompt="Who am I?",
user_id="123",
role="admin"
).collect()
The tool can access user_id and role from the input parameters. Input parameters work with both .collect() and streaming.
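For example, the same custom inputs can be passed while streaming:
async for event in agent(
    prompt="Who am I?",
    user_id="123",
    role="admin"
):
    print(event)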
Overriding Model Configuration
What if you want to change the model, max_tokens, or thinking for each run? Instead of creating multiple agents, you can pass these as input parameters. This is useful for A/B testing different models, adjusting token limits per request, or dynamically selecting models based on task complexity.
# Agent with default values
agent = Agent(
name="my_agent",
model="openai/gpt-4o-mini", # Default model
model_params={"max_tokens": 1024} # Default max_tokens
)
# Override model: was "openai/gpt-4o-mini", now "anthropic/claude-sonnet-4-latest"
response = await agent(
prompt="What is the capital of Germany?",
model="anthropic/claude-sonnet-4-latest"
).collect()
# Override max_tokens: was 1024, now 2048
async for event in agent(
prompt="What is the capital of Germany?",
max_tokens=2048
):
print(event)
# Override thinking (provider-specific format)
# For OpenAI: {"effort": "high", "summary": True}
response = await agent(
prompt="Solve this complex problem",
model="openai/gpt-4o",
thinking={"effort": "high", "summary": True}
).collect()
# For Anthropic: {"type": "enabled", "budget_tokens": 10000}
response = await agent(
prompt="Solve this complex problem",
model="anthropic/claude-sonnet-4-latest",
thinking={
"type": "enabled",
"budget_tokens": 10000
}
).collect()
The parameter names model, max_tokens, and thinking are reserved and will affect model configuration when passed as input. These parameters will not be available as regular input to your agent. If you need to pass custom data without changing the actual model configuration, use different parameter names (e.g., data_model instead of model if you want to pass a data model name).
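For example, a non-reserved name such as data_model is passed through as a regular input without touching the model configuration (the parameter name and value here are illustrative):
# "data_model" is a regular input parameter, readable via get_run_context()
response = await agent(
    prompt="Validate this record",
    data_model="CustomerRecord"
).collect()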
Output
Calling .collect() returns an OutputEvent containing the agent’s response. Access the Message via the .output property:
result = await agent(prompt="What's 2+2?").collect()
# result is an OutputEvent
print(result.output)
# Message(role=assistant, content=[TextContent(type='text', text='2 + 2 = 4.')])
# Access the message text
print(result.output.content[0].text)
# "2 + 2 = 4."
Messages
Messages are the structured data format that agents use to communicate, with automatic handling of different content types and provider compatibility.
from timbal.types.message import Message
Messages contain a role and content:
Role Types:
- user - Messages from the user
- assistant - Messages from the AI agent
- system - System instructions and context
- tool - Tool execution results
Content Types:
- TextContent - Plain text messages
- FileContent - Files like PDFs, images, documents
- ToolCallContent - Function calls to tools
- ToolResultContent - Results from tool executions
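For example, messages with different roles are constructed the same way (a minimal sketch using the constructor shown below):
from timbal.types.message import Message
from timbal.types.content import TextContent

system_msg = Message(
    role="system",
    content=[TextContent(text="You are a helpful assistant.")]
)
user_msg = Message(
    role="user",
    content=[TextContent(text="What is the capital of Germany?")]
)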
Messages can contain different types of content - text, files, tool calls, and tool results. The framework automatically handles complex content structures:
from timbal.types.content import FileContent, TextContent
from timbal.types.file import File
# Message with text and file
mixed_message = Message(
role="user",
content=[
TextContent(text="Analyze this document:"),
FileContent(path=File.validate("report.pdf"))
]
)
A message like the one above can be created more concisely using Message.validate():
message = Message.validate([
"Summarize this document:",
File.validate("quarterly_report.pdf")
])
Files
Agents can process files directly through the message content system. The framework automatically handles file reading, content extraction, and formatting for the AI model.
from timbal.types.file import File
The framework supports common document and media formats:
- Text files (.txt, .md) - Direct content inclusion
- PDFs (.pdf) - Text extraction with structure preservation
- Images (.png, .jpg, .gif) - Visual analysis through vision-capable models
- Spreadsheets (.xlsx, .csv) - Structured data representation
- Documents (.docx) - Text and formatting extraction
Files are automatically converted to Timbal File objects using File.validate():
file = File.validate("quarterly_report.pdf")
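The resulting File object can then be included in a prompt like any other input:
response = await agent(
    prompt=["Summarize this report", file]
).collect()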