Memory in Timbal agents enables multi-turn conversations and context persistence across agent interactions. Agents automatically maintain conversation history without requiring additional configuration.

## Documentation Index
Fetch the complete documentation index at: https://docs.timbal.ai/llms.txt
Use this file to discover all available pages before exploring further.
## How Memory Works
Timbal implements memory through its tracing system, which captures conversation history during agent execution. When an agent runs, it automatically resolves memory from previous interactions to maintain conversational context.

### Automatic Memory Resolution
During each agent execution:

- The agent checks for previous conversation context
- If found, it retrieves conversation history from the tracing data
- Previous messages are automatically included in the current conversation
- The agent processes the new input with full conversation context
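The resolution steps above can be sketched as a small toy model. Nothing below is Timbal's actual API: the trace store, function names, and message shapes are illustrative assumptions made for this sketch.

```python
# Toy model of automatic memory resolution (illustrative, not Timbal's API).
# A trace store maps run ids to the messages recorded for that run; each
# new run loads its parent run's messages before processing new input.

trace_store = {}  # run_id -> list of message dicts captured for that run

def resolve_memory(parent_run_id):
    """Return prior conversation history, or an empty list if none exists."""
    return list(trace_store.get(parent_run_id, []))

def run_agent(run_id, user_input, parent_run_id=None):
    messages = resolve_memory(parent_run_id)            # previous context
    messages.append({"role": "user", "content": user_input})
    # The model call is stubbed out; a real agent would generate a reply here.
    messages.append({"role": "assistant", "content": f"echo: {user_input}"})
    trace_store[run_id] = messages                      # persist the trace
    return messages

first = run_agent("run-1", "Hi, I'm Ada.")
second = run_agent("run-2", "What's my name?", parent_run_id="run-1")
# `second` contains all four messages, so the model sees the full context.
```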
### Storage Options
Memory storage depends on your deployment environment:

- Timbal Platform: Conversation history is automatically persisted with high availability and cross-instance sharing
- Local Development: Uses in-memory storage that’s fast but cleared on restart
### Nested Agent Memory
When agents call other agents, memory flows seamlessly between them. Child agents automatically receive conversation context from their parent, enabling sophisticated multi-agent workflows while maintaining conversation continuity.

## Configuration
Memory is automatically configured based on your deployment:

- Platform Deployment: No configuration needed
- Self-Hosted: Uses in-memory storage by default, with platform integration available
## Rewind
Timbal supports conversation branching, allowing you to rewind to any previous point and explore alternative conversation paths.

### How Branching Works
Each agent interaction creates a unique run with its own context. You can branch from any previous run by referencing its `run_id`, creating independent conversation paths that diverge from that point.
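One way to picture this mechanism: each run records the run it branched from, so walking the parent chain rebuilds the history of that branch only. This is an illustrative model under assumed names, not Timbal's actual data structures.

```python
# Illustrative model of run branching (not Timbal's API).
runs = {}  # run_id -> {"parent": parent run_id or None, "messages": [...]}

def start_run(run_id, messages, parent=None):
    """Record a run that branches from `parent` (None for a root run)."""
    runs[run_id] = {"parent": parent, "messages": list(messages)}

def branch_history(run_id):
    """Collect messages from the root run down to `run_id`."""
    if run_id is None:
        return []
    run = runs[run_id]
    return branch_history(run["parent"]) + run["messages"]

start_run("r1", ["user: plan a 3-day trip"])
start_run("r2", ["user: make it cheaper"], parent="r1")
start_run("r3", ["user: add a museum day"], parent="r1")  # second branch off r1
# r2 and r3 share r1's history but their memories are independent afterwards.
```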
### Common Use Cases
- A/B Testing: Compare different conversation strategies
- Error Recovery: Return to a state before an error occurred
- Debugging: Isolate specific conversation states for testing
- Exploration: Test “what if” scenarios without affecting the main conversation
Each branch maintains independent memory from the branching point. Learn more about the underlying mechanisms in Context.
## Memory Compaction
For long-running conversations, memory can grow large enough to exceed the model's context window. Timbal provides built-in memory compaction strategies, such as keeping only the last N turns, truncating tool results, or summarizing older messages with an LLM, that fire automatically when context utilization crosses a configurable threshold.

Memory compaction is a beta feature. See Memory Compaction for the full reference.
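A minimal sketch of one strategy named above, keeping only the most recent messages once estimated utilization crosses a threshold. The token heuristic, parameter names, and default numbers are all assumptions for illustration, not Timbal's configuration surface.

```python
# Sketch of a "keep the last N" compaction pass with a utilization
# threshold (illustrative heuristic, not Timbal's implementation).
def estimate_tokens(messages):
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compact(messages, context_window=8000, threshold=0.8, keep_last=6):
    """Return messages unchanged while under budget; otherwise keep any
    system prompt plus the `keep_last` most recent messages."""
    if estimate_tokens(messages) < context_window * threshold:
        return messages                      # under budget: no-op
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]        # system prompt + recent tail
```

A summarization strategy would replace the dropped prefix with a single LLM-generated summary message instead of discarding it outright.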