
CodeboltAgent

Use CodeboltAgent when you don't need to change any logic in the agentic loop. You provide instructions, call processMessage, and the framework handles everything — context assembly, LLM calls, tool execution, compaction, and error recovery.

If you need to customize the loop — add processors, swap modifiers, inject pre/post hooks — use the Processor Pattern instead.

Source: packages/agent/src/unified/agent/codeboltAgent.ts

Quick start

```typescript
import codebolt from '@codebolt/codeboltjs';
import { CodeboltAgent } from '@codebolt/agent/unified';
import { FlatUserMessage } from '@codebolt/types/sdk';

const agent = new CodeboltAgent({
  instructions: 'You are a helpful coding assistant.',
});

codebolt.onMessage(async (reqMessage: FlatUserMessage) => {
  const result = await agent.processMessage(reqMessage);

  if (!result.success) {
    throw new Error(result.error ?? 'Agent failed');
  }

  return result.finalMessage;
});
```

processMessage

```typescript
const result = await agent.processMessage(message);
// or, continuing from a previous run:
const nextResult = await agent.processMessage(message, context);
```
| Parameter | Type | Description |
|---|---|---|
| `message` | `string \| FlatUserMessage` | The user message. A plain string is auto-wrapped into a `FlatUserMessage` |
| `context` | `ProcessedMessage` (optional) | Continue from a previous conversation state — skips initial prompt generation |

Return value

| Field | Type | Description |
|---|---|---|
| `success` | `boolean` | Whether the run completed without errors |
| `result` | `ProcessedMessage \| null` | The final conversation state |
| `context` | `ProcessedMessage \| null` | Same as `result` — pass to a follow-up call to continue |
| `finalMessage` | `string \| undefined` | The agent's final text response |
| `error` | `string \| undefined` | Error message (only when `success` is `false`) |
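
A minimal way to consume this shape — the interface below is a local sketch mirroring the documented fields, not the library's exported type:

```typescript
// Local sketch of the documented return shape; the real result also
// carries `result` and `context` (ProcessedMessage | null).
interface ProcessResultSketch {
  success: boolean;
  finalMessage?: string;
  error?: string;
}

// Unwrap the final text, surfacing the error when the run failed.
function finalTextOrThrow(result: ProcessResultSketch): string {
  if (!result.success) {
    throw new Error(result.error ?? 'Agent failed');
  }
  return result.finalMessage ?? '';
}
```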

Continuing a conversation

```typescript
const firstResult = await agent.processMessage(reqMessage);

const secondResult = await agent.processMessage(
  'Continue from the previous result.',
  firstResult.context,
);
```

Or create a new agent with the context:

```typescript
const followUpAgent = new CodeboltAgent({
  instructions: 'Continue the previous task.',
  context: firstResult.context ?? undefined,
});

const secondResult = await followUpAgent.processMessage('What was the outcome?');
```

Configuration

| Field | Type | Default | Description |
|---|---|---|---|
| `instructions` | `string` | `'Based on User Message send reply'` | System prompt |
| `enableLogging` | `boolean` | `true` | Log execution events to console |
| `maxTurns` | `number` | `25` | Maximum LLM turns before the agent throws |
| `allowedTools` | `string[]` | all tools | Restrict available tools by name |
| `context` | `ProcessedMessage` | — | Resume from a previous conversation |
| `loopDetectionService` | `LoopDetectionService` | — | Detect and break infinite loops |
| `compaction` | `CompactionOrchestratorOptions` | `{}` | Conversation compaction settings |
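
The defaults above combine with whatever you pass in — a sketch of that merge using a local mirror of the documented fields (the real options type is exported from `@codebolt/agent/unified` and has more fields):

```typescript
// Hypothetical local mirror of the documented options, for illustration only.
interface AgentOptionsSketch {
  instructions?: string;
  enableLogging?: boolean;
  maxTurns?: number;
  allowedTools?: string[];
}

// Defaults as documented in the table above.
const DEFAULTS = {
  instructions: 'Based on User Message send reply',
  enableLogging: true,
  maxTurns: 25,
};

// Unspecified fields fall back to the documented defaults.
function withDefaults(opts: AgentOptionsSketch) {
  return { ...DEFAULTS, ...opts };
}

const resolved = withDefaults({ maxTurns: 10, allowedTools: ['read_file'] });
```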

Need to customize the pipeline? See Processor Pattern.

What it does under the hood

Each turn:

  1. Compaction — compresses the conversation if it's getting long.
  2. Tool refresh — re-fetches available tools from MCP servers.
  3. LLM call — sends the prompt, gets back text or tool calls.
  4. Tool execution — runs any tool calls the LLM requested.
  5. Check — if the LLM produced a final answer, return. Otherwise, next turn.

If the LLM hits a token limit, the agent automatically tries reactive compaction and retries.
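
The per-turn loop can be sketched as follows — a simplified model with stubbed LLM and tool calls; the real loop in `codeboltAgent.ts` also handles compaction, tool refresh, and error recovery:

```typescript
// Simplified model of the per-turn loop described above (steps 1-2,
// compaction and tool refresh, are omitted for brevity).
interface TurnResult {
  finalAnswer?: string;
  toolCalls: string[];
}

async function runAgentLoop(
  callLlm: (turn: number) => Promise<TurnResult>,
  executeTool: (name: string) => Promise<void>,
  maxTurns = 25,
): Promise<string> {
  for (let turn = 1; turn <= maxTurns; turn++) {
    const res = await callLlm(turn);       // 3. LLM call
    for (const name of res.toolCalls) {    // 4. run requested tools
      await executeTool(name);
    }
    if (res.finalAnswer !== undefined) {   // 5. final answer? done
      return res.finalAnswer;
    }
  }
  // Mirrors the documented maxTurns behavior: the agent throws.
  throw new Error(`No final answer after ${maxTurns} turns`);
}
```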

Default message modifiers

When you don't customize processors.messageModifiers, these run automatically:

  1. ChatHistoryMessageModifier — prior thread history
  2. EnvironmentContextModifier — date, platform, workspace path
  3. DirectoryContextModifier — workspace file tree
  4. IdeContextModifier — active file, open files, cursor, selection
  5. CoreSystemPromptModifier — your instructions
  6. ToolInjectionModifier — available tools from Codebolt + MCP servers
  7. AtFileProcessorModifier — resolves @file mentions
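
Conceptually, each modifier receives the prompt built so far and returns an enriched version, in the order listed — a sketch with illustrative stand-ins, not the real modifier APIs:

```typescript
// Hypothetical modifier shape: prompt in, enriched prompt out.
type MessageModifier = (prompt: string) => string;

const chatHistory: MessageModifier = (p) => p + '\n<history>…</history>';
const environmentContext: MessageModifier = (p) => p + '\n<env>linux, /workspace</env>';
const coreSystemPrompt: MessageModifier = (p) => p + '\n<system>your instructions</system>';

// Modifiers run left to right, each receiving the previous output.
function applyModifiers(base: string, modifiers: MessageModifier[]): string {
  return modifiers.reduce((prompt, modify) => modify(prompt), base);
}

const prompt = applyModifiers('user message', [
  chatHistory,
  environmentContext,
  coreSystemPrompt,
]);
```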

To customize the pipeline, see Processor Pattern.

createCodeboltAgent helper

A convenience factory that maps `systemPrompt` to `instructions`:

```typescript
import { createCodeboltAgent } from '@codebolt/agent/unified';

const agent = createCodeboltAgent({
  systemPrompt: 'You are a helpful assistant.',
  maxTurns: 20,
});
```

See also