Level 2 — codeboltjs
Build an agent directly on the @codebolt/codeboltjs SDK, without the framework wrapper. You write the loop yourself: total control, no assistance.
When level 2 is the right choice
Very rarely. Signs you actually need it:
- You're building infrastructure, not an agent. A test runner, a debugger, a custom IDE adapter, a batch migration tool.
- Your loop shape doesn't match any pattern and can't be expressed as a composition of framework primitives.
- You need low-level APIs the framework intentionally doesn't expose — raw event stream subscriptions, direct channel access, custom IPC.
- Performance matters more than ergonomics — you're running thousands of agents and the framework's per-call overhead is measurable.
If none of these apply, level 1 is the right choice. 99% of custom agents should live at level 1.
What you lose by going to level 2
The framework was doing work for you. At level 2, you inherit responsibility for:
| Responsibility | You now own |
|---|---|
| Agent loop | Writing the deliberate/execute/reflect cycle yourself |
| Heartbeats | Emitting them so HeartbeatManager doesn't kill you |
| Phase tracking | Telling the server what phase you're in |
| Event log writes | Emitting events with causal parents |
| Memory wiring | Deciding what goes into episodic / persistent / etc. |
| Context assembly | Calling contextAssemblyService yourself |
| Error recovery | Structured error emission with run context |
| Replay support | Ensuring your agent is deterministic given the same event log |
It's possible to get these wrong, and the failures are subtle. If you're not sure you need level 2, you don't.
What @codebolt/codeboltjs gives you
`@codebolt/codeboltjs` is the client SDK for the server. It exports a singleton `codebolt` as its default export. Level-2 agents use it directly. The main surfaces:
```javascript
const codebolt = require('@codebolt/codeboltjs').default;

// LLM access
const { completion } = await codebolt.llm.inference({ messages, full: true, tools: availableTools });

// File system
const content = await codebolt.fs.readFile(path);

// Git operations
const status = await codebolt.git.git_status();

// MCP / tool access
const [didUserReject, result] = await codebolt.mcp.executeTool('codebolt', 'read_file', args);

// Chat operations
await codebolt.chat.sendMessage(message, {});

// State management
await codebolt.cbstate.set(key, value);
const val = await codebolt.cbstate.get(key);
```
See Reference → SDKs → codeboltjs for the full API.
A minimal level-2 agent
```typescript
import codebolt from '@codebolt/codeboltjs';
import { FlatUserMessage } from '@codebolt/types/sdk';

const systemPrompt = 'You are a helpful coding assistant.';

codebolt.onMessage(async (reqMessage: FlatUserMessage) => {
  const toolsResponse: any = await codebolt.mcp.listMcpFromServers(['codebolt']);
  const availableTools = toolsResponse?.data?.tools || toolsResponse?.data || [];

  const conversationMessages: any[] = [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: reqMessage.userMessage ?? '' },
  ];

  while (true) {
    const { completion } = await codebolt.llm.inference({
      messages: conversationMessages,
      full: true,
      tools: availableTools,
    });

    const assistantMessage = completion?.choices?.[0]?.message;
    if (!assistantMessage) {
      throw new Error('LLM did not return a message.');
    }
    conversationMessages.push(assistantMessage);

    // No tool calls: the model is done. Send the reply and exit the loop.
    if (!assistantMessage.tool_calls?.length) {
      const finalReply = assistantMessage.content ?? '';
      codebolt.chat.sendMessage(finalReply, {});
      return finalReply;
    }

    // Otherwise execute each requested tool and append its result to the
    // transcript, so the next inference call sees what happened.
    for (const toolCall of assistantMessage.tool_calls) {
      const toolArguments = JSON.parse(toolCall.function.arguments || '{}');
      const [didUserReject, toolResult] = await codebolt.mcp.executeTool(
        'codebolt',
        toolCall.function.name,
        toolArguments
      );
      conversationMessages.push({
        role: 'tool',
        tool_call_id: toolCall.id,
        content: didUserReject
          ? 'User rejected the tool execution.'
          : JSON.stringify(toolResult),
      });
    }
  }
});
```
This is deliberately lower-level than level 1:
- You build the message list yourself.
- You decide which tools to expose to the model.
- You call `codebolt.llm.inference(...)` yourself.
- You inspect `tool_calls` yourself.
- You execute tools and append tool results back into the transcript yourself.
The example above assumes built-in codebolt tools for simplicity. Once you need multiple tool servers, custom routing, retries, or compaction, you're rebuilding framework behavior by hand.
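To make that concrete: with multiple tool servers, even deciding which server a tool call goes to is code you own. One possible convention is to qualify tool names with a server prefix; the `--` separator and the server names below are invented for illustration, not an SDK convention:

```typescript
// Hypothetical convention: tool names arrive as "server--tool".
// Nothing in the SDK mandates this; it is one way to route calls.
function routeToolCall(qualifiedName: string): { server: string; tool: string } {
  const sep = qualifiedName.indexOf('--');
  if (sep === -1) {
    // Unprefixed names fall through to the built-in server.
    return { server: 'codebolt', tool: qualifiedName };
  }
  return {
    server: qualifiedName.slice(0, sep),
    tool: qualifiedName.slice(sep + 2),
  };
}

// The executor is injected so the router stays pure and testable;
// in a real agent it would wrap codebolt.mcp.executeTool.
type ToolExecutor = (server: string, tool: string, args: unknown) => Promise<unknown>;

async function executeRouted(execute: ToolExecutor, qualifiedName: string, args: unknown) {
  const { server, tool } = routeToolCall(qualifiedName);
  return execute(server, tool, args);
}
```

Retries, user-rejection handling, and transcript compaction each add a similar layer on top, which is exactly the accumulation of hand-rolled framework behavior this page warns about.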