Processor Pattern

Customize the agent loop by plugging processors into CodeboltAgent's five pipeline slots. Every shipped processor lives in @codebolt/agent/processor-pieces — import the ones you need, wire them into the processors config, and the framework runs them at the right point in the loop.

The five slots

┌─ messageModifiers ───────────────────────────┐
│ Shape the prompt before inference            │
└──────────────────────────────────────────────┘

┌─ preInferenceProcessors ─────────────────────┐
│ Last-minute prompt changes before LLM call   │
└──────────────────────────────────────────────┘

▼ LLM inference

┌─ postInferenceProcessors ────────────────────┐
│ Inspect/annotate the LLM response            │
└──────────────────────────────────────────────┘

┌─ preToolCallProcessors ──────────────────────┐
│ Validate or intercept tool calls             │
└──────────────────────────────────────────────┘

▼ Tool execution

┌─ postToolCallProcessors ─────────────────────┐
│ Compact, process results, decide next step   │
└──────────────────────────────────────────────┘

Minimal example

Use CodeboltAgent defaults for message assembly but add compression, loop detection, and compaction:

import { CodeboltAgent } from '@codebolt/agent/unified';
import {
  ChatCompressionModifier,
  LoopDetectionModifier,
  ConversationCompactorModifier,
} from '@codebolt/agent/processor-pieces';

const agent = new CodeboltAgent({
  instructions: 'Work carefully through larger coding tasks.',
  processors: {
    preInferenceProcessors: [
      new ChatCompressionModifier({ contextPercentageThreshold: 0.7 }),
    ],
    postInferenceProcessors: [
      new LoopDetectionModifier({ maxSimilarMessages: 3 }),
    ],
    postToolCallProcessors: [
      new ConversationCompactorModifier({ compactStrategy: 'smart' }),
    ],
  },
});

Because messageModifiers is not set here, CodeboltAgent keeps its full default pipeline. The other three slots are additive — they default to empty, so you're only adding behaviour.
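A slot can also take a processor of your own. The library's processor interface is not shown on this page, so the sketch below is only an illustration of the idea: it assumes a processor exposes a modify() method that receives and returns the message list. The Message shape and the modify() hook name are assumptions, not the library's actual contract.

```typescript
// Hypothetical sketch of a custom pre-inference processor. The Message
// shape and the modify() method name are assumptions; check the library's
// processor interface for the real contract.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

class TimestampModifier {
  // Prepend the current time so the model knows when it is running.
  modify(messages: Message[]): Message[] {
    return [
      { role: 'system', content: `Current time: ${new Date().toISOString()}` },
      ...messages,
    ];
  }
}

const modified = new TimestampModifier().modify([
  { role: 'user', content: 'What day is it?' },
]);
```

Whatever the real interface looks like, the pattern is the same: take the assembled messages, return a changed copy, and let the framework pass the result to the next processor in the slot.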

Replacing the default message pipeline

When you set processors.messageModifiers, you replace the entire default pipeline. Include everything you need:

import { CodeboltAgent } from '@codebolt/agent/unified';
import {
  ChatHistoryMessageModifier,
  EnvironmentContextModifier,
  DirectoryContextModifier,
  IdeContextModifier,
  CoreSystemPromptModifier,
  ToolInjectionModifier,
  AtFileProcessorModifier,
  ContextAssemblyModifier,
} from '@codebolt/agent/processor-pieces';

const agent = new CodeboltAgent({
  instructions: 'A memory-aware assistant.',
  processors: {
    messageModifiers: [
      new ChatHistoryMessageModifier({ enableChatHistory: true }),
      new EnvironmentContextModifier({ enableFullContext: false }),
      new DirectoryContextModifier(),
      new IdeContextModifier({
        includeActiveFile: true,
        includeOpenFiles: true,
        includeCursorPosition: true,
        includeSelectedText: true,
      }),
      new CoreSystemPromptModifier({ customSystemPrompt: 'You are a memory-aware assistant.' }),
      new ToolInjectionModifier({ includeToolDescriptions: true }),
      new AtFileProcessorModifier({ enableRecursiveSearch: true }),
      // Added: context assembly for memory integration
      new ContextAssemblyModifier({
        scope: 'workspace',
        includeMemory: true,
      }),
    ],
  },
});

All shipped processors

Message modifiers

  • ChatHistoryMessageModifier: Prepends prior thread history; injects synthetic tool-response messages for unresolved tool calls
  • EnvironmentContextModifier: Adds date, platform, workspace path, directory listing
  • DirectoryContextModifier: Adds the workspace file tree, honoring .gitignore
  • IdeContextModifier: Adds active file, open files, cursor position, selected text
  • CoreSystemPromptModifier: Sets the system prompt
  • ToolInjectionModifier: Fetches and injects tools from Codebolt and MCP servers
  • AtFileProcessorModifier: Resolves @file mentions and appends file contents
  • ArgumentProcessorModifier: Appends invocation metadata from createdMessage.metadata.invocation
  • MemoryImportModifier: Replaces @path file references with file contents
  • ChatRecordingModifier: Records prompts to .chat-recordings in jsonl or markdown
  • ContextAssemblyModifier: Calls codebolt.contextAssembly.getContext() and injects memory context
  • RuleBasedContextModifier: Evaluates context rules; fetches only included/forced memories
  • MemoryTypeContextModifier: Fetches specific memory types by name

Pre-inference processors

  • ChatCompressionModifier: Summarizes older history when the prompt crosses a token threshold

Post-inference processors

  • LoopDetectionModifier: Tracks message similarity; injects a system warning when repetition is detected

Pre-tool-call processors

  • ToolValidationModifier: Records validation metadata (core logic is TODO)
  • ToolParameterModifier: Records parameter metadata (core logic is TODO)

Post-tool-call processors

  • ShellProcessorModifier: Replaces {{args}} placeholders and optionally executes !{...} shell injections (disabled by default)
  • ConversationCompactorModifier: Truncates oversized tool output, deduplicates file reads, compresses history using simple, smart, or summarize strategies

Ordering

Within each slot, processors run in array order:

processors: {
  messageModifiers: [
    new DirectoryContextModifier(), // runs first
    new AtFileProcessorModifier(),  // runs second
  ],
}
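Conceptually, running a slot is a left-to-right fold over its array, each processor receiving the previous one's output. A sketch of the idea (not the library's code), with plain functions standing in for the modifier classes above:

```typescript
// Conceptual sketch only: each slot behaves like a reduce over its array.
type Modifier = (prompt: string) => string;

function runSlot(prompt: string, slot: Modifier[]): string {
  return slot.reduce((acc, processor) => processor(acc), prompt);
}

const slot: Modifier[] = [
  (p) => p + ' [directory tree]',  // stands in for DirectoryContextModifier
  (p) => p + ' [@file contents]',  // stands in for AtFileProcessorModifier
];
const out = runSlot('user prompt', slot);
// out: 'user prompt [directory tree] [@file contents]'
```

This is why ordering matters: a processor that rewrites the prompt sees everything earlier processors added, and nothing from later ones.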

Which slots can stop the loop

Only tool-call processors can return shouldExit: true:

  • preToolCallProcessors
  • postToolCallProcessors

Message, pre-inference, and post-inference processors always return a ProcessedMessage and cannot halt the loop.
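As an illustration, a custom post-tool-call processor could stop the loop after a budget of tool executions. The result shape below (shouldExit alongside the processed output) and the method name are assumptions based on the slot descriptions above, not the library's actual interface:

```typescript
// Hypothetical sketch: halt the loop after a fixed number of tool calls.
// The ProcessorResult shape and processToolResult() name are assumptions.
interface ProcessorResult {
  shouldExit: boolean;
  output: string;
}

class MaxToolCallsProcessor {
  private calls = 0;
  constructor(private readonly maxCalls: number) {}

  // Assumed hook: called once per tool execution by the loop.
  processToolResult(toolOutput: string): ProcessorResult {
    this.calls += 1;
    return {
      shouldExit: this.calls >= this.maxCalls, // true once the budget is spent
      output: toolOutput,
    };
  }
}

// After the third tool call, shouldExit flips to true and the loop stops.
const guard = new MaxToolCallsProcessor(3);
const results = ['a', 'b', 'c'].map((out) => guard.processToolResult(out));
```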

See also