# Context and Memory
The single most important question for an agent is: what does it know right now? The answer is assembled from many sources, every turn, by the context assembly pipeline.
## Context vs memory
- Context is everything passed to the LLM for the current turn: bounded and ephemeral.
- Memory is the persistent substrate the agent draws from: unbounded and durable.
Context is assembled fresh each turn from memory + the current task + recent history.
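In code, the distinction looks roughly like this. The `Memory` class and `assemble_context` function are illustrative sketches, not Codebolt's API: memory persists across calls, while the context dict is rebuilt from scratch every turn.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    # Durable: survives across turns and runs.
    kv: dict = field(default_factory=dict)
    episodic: list = field(default_factory=list)

def assemble_context(memory: Memory, task: str, recent: int = 5) -> dict:
    # Ephemeral: rebuilt from scratch on every turn, bounded by `recent`.
    return {
        "task": task,
        "prefs": dict(memory.kv),
        "history": memory.episodic[-recent:],
    }

mem = Memory(kv={"tone": "terse"})
mem.episodic.append("turn 1: listed files in src/")
ctx = assemble_context(mem, "rename foo to bar")
```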
## The memory layers
Codebolt has several memory layers, each with a different access pattern and lifetime:
| Layer | Lifetime | Use |
|---|---|---|
| Working | Single turn | Scratchpad for the current LLM call |
| Episodic | Single agent run | "What I did and what happened" — turn history |
| Persistent KV | Forever, cheap | Small key→value (user prefs, flags) |
| Persistent JSON | Forever, structured | Bigger structured records |
| Markdown notes | Forever, human-editable | Long-form notes the human and agent share |
| Knowledge graph (Kuzu) | Forever, queryable | Entities and relationships |
| Vector store | Forever, semantic | Embeddings for similarity search |
Different memories suit different jobs. A user preference goes in KV. A code symbol graph goes in the KG. A recallable past conversation goes in vector + episodic.
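One way to make that routing concrete is a small dispatch table. The layer names mirror the table above; `route` and the `kind` labels are hypothetical helpers for illustration, not Codebolt identifiers:

```python
ROUTES = {
    "user_pref": "persistent_kv",
    "structured_record": "persistent_json",
    "shared_note": "markdown_notes",
    "code_symbol": "knowledge_graph",
    "past_conversation": "vector_store",  # plus episodic for the raw turns
}

def route(kind: str) -> str:
    # Anything turn-scoped falls back to working memory.
    return ROUTES.get(kind, "working")
```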
## Context assembly
Each turn, the assembly pipeline:
- Loads the system prompt (and any capability fragments).
- Pulls relevant memory based on the current task and context rules.
- Adds recent turns from episodic memory.
- Runs processors — compaction, redaction, reranking, loop-detection.
- Hands the result to the LLM.
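The steps above can be sketched as a small pipeline of functions. All names here are illustrative, assuming a memory dict and processors that take and return a context list; Codebolt's actual pipeline API may differ:

```python
def pull_relevant(memory: dict, task: str) -> list[str]:
    # Naive relevance: keyword overlap with the task. A real pipeline
    # would consult context rules, the KG, or the vector store here.
    words = set(task.split())
    return [note for note in memory.get("notes", []) if words & set(note.split())]

def compact(ctx: list[str], limit: int = 10) -> list[str]:
    # Example processor: keep the system prompt, drop the oldest items after it.
    if len(ctx) <= limit:
        return ctx
    return [ctx[0]] + ctx[-(limit - 1):]

def assemble(system_prompt: str, memory: dict, task: str, processors) -> list[str]:
    ctx = [system_prompt]                   # 1. system prompt
    ctx += pull_relevant(memory, task)      # 2. relevant memory
    ctx += memory.get("episodic", [])[-5:]  # 3. recent turns
    for process in processors:              # 4. processors (compaction, etc.)
        ctx = process(ctx)
    return ctx                              # 5. handed to the LLM

memory = {"notes": ["foo.py defines bar()"], "episodic": ["turn 1", "turn 2"]}
ctx = assemble("You are an agent.", memory, "edit foo.py", [compact])
```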
Context rules let you express things like "when the task mentions a file, include that file's symbols from the KG" or "never include test files when refactoring production code".
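A context rule can be modeled as a predicate over the task plus a filter on what gets included. The encoding below is a hypothetical sketch (the `KG` dict and both functions are invented for illustration; Codebolt's rule syntax may differ), covering the two example rules from the text:

```python
# Toy knowledge graph: file -> symbols defined in it.
KG = {
    "auth.py": ["login()", "logout()"],
    "test_auth.py": ["test_login()"],
}

def select_files(task: str) -> list[str]:
    # Rule 1: when the task mentions a file, include that file's symbols.
    files = [f for f in KG if f in task]
    # Rule 2: never include test files when refactoring.
    if "refactor" in task:
        files = [f for f in files if not f.startswith("test_")]
    return files

def symbols_for(task: str) -> list[str]:
    return [sym for f in select_files(task) for sym in KG[f]]
```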
## Why this matters
Bad context = bad answers. A confused agent is almost always a context problem (too much, too little, or the wrong stuff). The fix is rarely the prompt; it's usually the context rules.