
LLM API

LLM API - language model operations

```typescript
import { CodeBoltClient } from '@codebolt/clientsdk';

const client = new CodeBoltClient();
```

Quick Reference

| Method | Description |
| --- | --- |
| `cancelDownload` | Cancels an in-progress model download. |
| `deleteLocalModel` | Deletes a locally downloaded model from disk. |
| `downloadModel` | Initiates the download of a model for local use. |
| `getAllModels` | Retrieves all models across all configured providers. |
| `getDownloadedLocalModels` | Retrieves the list of models that have been downloaded for local execution. |
| `getDownloadStatus` | Checks the current download status of a model. |
| `getEmbeddingSupportedLLMs` | Retrieves LLM providers that support text embedding capabilities. |
| `getLocalAgent` | Retrieves the LLM configuration for a specific local agent. |
| `getModels` | Retrieves the available models for a specific LLM provider. |
| `getPricing` | Fetches the current pricing information for all LLM models. |
| `getProviders` | Retrieves all configured LLM providers with their current status and available models. |
| `setDefault` | Sets the default LLM model and provider for the workspace. |
| `setLocalAgent` | Configures the LLM settings for a specific local agent. |
| `updateKey` | Updates the API key for a specific LLM provider. |
| `updatePricingToLocal` | Syncs the latest LLM pricing information to local storage. |

Methods


cancelDownload

client.llm.cancelDownload(data: LLMCancelDownloadRequest): Promise<unknown>

Cancels an in-progress model download.

Stops the download of a model that was previously initiated with `downloadModel`. Any partially downloaded data may be cleaned up.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `data` | `LLMCancelDownloadRequest` | Yes | The cancellation request identifying the download to stop |

Returns: Promise<unknown> — A promise that resolves when the cancellation has been processed

Full reference →


deleteLocalModel

client.llm.deleteLocalModel(modelId: string): Promise<unknown>

Deletes a locally downloaded model from disk.

Permanently removes a model that was previously downloaded for local execution, freeing up disk space. The model can be re-downloaded later if needed.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `modelId` | `string` | Yes | The identifier of the local model to delete |

Returns: Promise<unknown> — A promise that resolves when the model has been deleted

Full reference →


downloadModel

client.llm.downloadModel(data: LLMDownloadModelRequest): Promise<unknown>

Initiates the download of a model for local use.

Starts an asynchronous download of a model that can be run locally (e.g., via Ollama). Use `getDownloadStatus` to monitor progress.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `data` | `LLMDownloadModelRequest` | Yes | The download request specifying which model to download |

Returns: Promise<unknown> — A promise that resolves when the download has been initiated
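Because the download runs asynchronously, a common pattern is to poll the status until it settles. Below is a minimal polling sketch; the `state` and `progress` fields are assumptions about the `LLMDownloadStatus` shape, not confirmed by this reference.

```typescript
// Hypothetical sketch: poll an injected status-fetching function until the
// download leaves the "downloading" state. Field names are assumptions.
interface DownloadStatusLike {
  state: "downloading" | "completed" | "error";
  progress: number; // 0-100
}

async function waitForDownload(
  getStatus: () => Promise<DownloadStatusLike>,
  intervalMs = 1000,
): Promise<DownloadStatusLike> {
  for (;;) {
    const status = await getStatus();
    if (status.state !== "downloading") return status; // completed or error
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

In practice the fetcher would wrap the SDK call, e.g. `waitForDownload(() => client.llm.getDownloadStatus(modelId))`, assuming the resolved value matches the shape above.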

Full reference →


getAllModels

client.llm.getAllModels(data?: Record<string, unknown>): Promise<LLMModel[]>

Retrieves all models across all configured providers.

Returns a flat list of every available model from all providers. Optionally accepts filter criteria to narrow results by capability, provider, or other attributes.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `data` | `Record<string, unknown>` | No | Optional filter criteria for narrowing the model list |

Returns: Promise<LLMModel[]> — A promise that resolves to an array of all available LLM models
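Since the filter criteria are an open `Record`, you can also narrow the flat list client-side after fetching. A small sketch; the `provider` and `name` fields are assumptions about what `LLMModel` carries.

```typescript
// Hypothetical sketch: group/filter a flat model list client-side.
// The LLMModel fields used here (provider, name) are assumed.
interface ModelLike {
  provider: string;
  name: string;
}

function modelsByProvider(models: ModelLike[], provider: string): ModelLike[] {
  return models.filter((m) => m.provider === provider);
}
```

Typical usage would be `const all = await client.llm.getAllModels(); const ollamaModels = modelsByProvider(all, "ollama");`.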

Full reference →


getDownloadedLocalModels

client.llm.getDownloadedLocalModels(): Promise<LLMModel[]>

Retrieves the list of models that have been downloaded for local execution.

Returns all models currently available on disk for local inference, as opposed to cloud-hosted models that require API calls.

No parameters.

Returns: Promise<LLMModel[]> — A promise that resolves to an array of locally downloaded models

Full reference →


getDownloadStatus

client.llm.getDownloadStatus(modelId: string): Promise<LLMDownloadStatus>

Checks the current download status of a model.

Returns progress information for an active or completed model download, including percentage complete, bytes downloaded, and any error state.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `modelId` | `string` | Yes | The identifier of the model to check |

Returns: Promise<LLMDownloadStatus> — A promise that resolves to the current download status
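For displaying progress to a user, the byte counts can be rendered as a percentage and megabytes. The field names `bytesDownloaded` and `totalBytes` are assumptions about the status payload, used here only to illustrate the formatting:

```typescript
// Hypothetical sketch: render download progress from assumed status fields.
function formatProgress(bytesDownloaded: number, totalBytes: number): string {
  const pct = totalBytes > 0 ? Math.floor((bytesDownloaded / totalBytes) * 100) : 0;
  const mb = (n: number) => (n / (1024 * 1024)).toFixed(1);
  return `${pct}% (${mb(bytesDownloaded)} MB / ${mb(totalBytes)} MB)`;
}
```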

Full reference →


getEmbeddingSupportedLLMs

client.llm.getEmbeddingSupportedLLMs(): Promise<LLMProvider[]>

Retrieves LLM providers that support text embedding capabilities.

Filters providers to only those offering embedding models, which convert text into numerical vectors for semantic search, similarity matching, and RAG workflows.

No parameters.

Returns: Promise<LLMProvider[]> — A promise that resolves to an array of embedding-capable LLM providers
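The vectors these providers produce are typically compared with cosine similarity. This helper is the standard formula, independent of which provider generated the embeddings:

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Returns 1 for identical direction, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```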

Full reference →


getLocalAgent

client.llm.getLocalAgent(agentName: string): Promise<LLMLocalAgentConfig>

Retrieves the LLM configuration for a specific local agent.

Returns the provider and model settings that have been configured for the given agent, or the default configuration if no agent-specific override exists.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `agentName` | `string` | Yes | The name of the agent whose LLM configuration to retrieve |

Returns: Promise<LLMLocalAgentConfig> — A promise that resolves to the agent's LLM configuration

Full reference →


getModels

client.llm.getModels(data: LLMGetModelsRequest): Promise<LLMModel[]>

Retrieves the available models for a specific LLM provider.

Queries the models catalog for a given provider, returning all models that can be used with that provider's current configuration and API key.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `data` | `LLMGetModelsRequest` | Yes | The request specifying which provider's models to retrieve |

Returns: Promise<LLMModel[]> — A promise that resolves to an array of models available for the provider

Full reference →


getPricing

client.llm.getPricing(): Promise<unknown>

Fetches the current pricing information for all LLM models.

Returns token pricing data (input/output cost per token) for each model across all providers. Useful for cost estimation and budget tracking of LLM usage.

No parameters.

Returns: Promise<unknown> — A promise that resolves to the pricing data for all available models
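Given per-token prices, estimating a request's cost is a simple multiply-and-sum. The field names below (`inputCostPerToken`, `outputCostPerToken`) are assumptions about the pricing payload's shape, since this method resolves to `unknown`:

```typescript
// Hypothetical sketch: estimate a single request's cost in USD.
// The per-token price field names are assumed, not confirmed by the SDK.
interface ModelPricingLike {
  inputCostPerToken: number;  // USD per input (prompt) token
  outputCostPerToken: number; // USD per output (completion) token
}

function estimateCost(
  pricing: ModelPricingLike,
  inputTokens: number,
  outputTokens: number,
): number {
  return (
    inputTokens * pricing.inputCostPerToken +
    outputTokens * pricing.outputCostPerToken
  );
}
```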

Full reference →


getProviders

client.llm.getProviders(): Promise<LLMProvider[]>

Retrieves all configured LLM providers with their current status and available models.

Returns the complete list of LLM providers (e.g., OpenAI, Anthropic, Ollama) that have been configured in the system, including whether they have valid API keys set.

No parameters.

Returns: Promise<LLMProvider[]> — A promise that resolves to an array of LLM provider configurations
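A common follow-up is filtering the list down to providers that are actually usable. The `hasValidKey` field name is an assumption about the `LLMProvider` shape, used here only to illustrate the pattern:

```typescript
// Hypothetical sketch: pick out providers that are ready to use.
// The name and hasValidKey fields are assumed LLMProvider properties.
interface ProviderLike {
  name: string;
  hasValidKey: boolean;
}

function readyProviders(providers: ProviderLike[]): string[] {
  return providers.filter((p) => p.hasValidKey).map((p) => p.name);
}
```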

Full reference →


setDefault

client.llm.setDefault(data: LLMSetDefaultRequest): Promise<unknown>

Sets the default LLM model and provider for the workspace.

Configures which model is used by default when no specific model is requested. This affects all operations that rely on LLM inference without explicit model selection.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `data` | `LLMSetDefaultRequest` | Yes | The default LLM configuration |

Returns: Promise<unknown> — A promise that resolves when the default has been updated

Full reference →


setLocalAgent

client.llm.setLocalAgent(data: LLMLocalAgentConfig): Promise<unknown>

Configures the LLM settings for a specific local agent.

Allows overriding the default LLM configuration on a per-agent basis, so different agents can use different models or provider settings tailored to their tasks.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `data` | `LLMLocalAgentConfig` | Yes | The agent-specific LLM configuration |

Returns: Promise<unknown> — A promise that resolves when the agent configuration has been saved

Full reference →


updateKey

client.llm.updateKey(data: LLMUpdateKeyRequest): Promise<unknown>

Updates the API key for a specific LLM provider.

Sets or rotates the authentication key used to communicate with an LLM provider's API. The key is stored securely and used for all subsequent requests to that provider.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `data` | `LLMUpdateKeyRequest` | Yes | The key update request |

Returns: Promise<unknown> — A promise that resolves when the key has been updated

Full reference →


updatePricingToLocal

client.llm.updatePricingToLocal(): Promise<unknown>

Syncs the latest LLM pricing information to local storage.

Downloads the most up-to-date pricing catalog from the remote source and persists it locally. Call this periodically to ensure cost calculations reflect current rates.

No parameters.

Returns: Promise<unknown> — A promise that resolves when the local pricing data has been updated
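"Periodically" can be implemented with a simple staleness check before deciding to sync. A minimal sketch; the 24-hour threshold is an arbitrary choice, not an SDK recommendation:

```typescript
// Hypothetical sketch: decide whether locally cached pricing data is old
// enough to warrant a re-sync. The default threshold is arbitrary.
function isPricingStale(
  lastSyncedAt: Date,
  now: Date,
  maxAgeHours = 24,
): boolean {
  const ageMs = now.getTime() - lastSyncedAt.getTime();
  return ageMs > maxAgeHours * 60 * 60 * 1000;
}
```

Usage would look like `if (isPricingStale(lastSync, new Date())) await client.llm.updatePricingToLocal();`, with `lastSync` tracked by the caller.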

Full reference →