inference

codebolt.llm.inference(message: object, llmrole: string): Promise<LLMResponse>
Sends an inference request to the LLM and returns the model's response. The model is selected based on the provided llmrole.

Parameters

Name      Type      Description
message   object    The input message or prompt to be sent to the LLM.
llmrole   string    The role of the LLM, used to determine which model handles the request. Optional.

Returns:

Promise<LLMResponse>
A promise that resolves with the LLM's response.
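Because inference returns a Promise, you will usually await it inside an async function (or chain .then()). A minimal sketch, assuming codebolt has already been imported and initialized as elsewhere in these docs; the internal shape of LLMResponse is not documented here, so the result is simply logged:

```js
async function run() {
  // Await the Promise<LLMResponse> returned by inference.
  const response = await codebolt.llm.inference({
    messages: [{ role: "user", content: "Hello" }]
  });
  console.log(response); // inspect the resolved LLMResponse
}

run().catch(console.error);
```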

Example

```js
let message = {
  messages: [
    {
      role: "system",
      content: "You are a developer agent, expert in writing code."
    },
    {
      role: "user",
      content: "Create a Node.js project."
    }
  ],
  tools: [],
  tool_choice: "auto" // only needed if you are using tools
};

const response = await codebolt.llm.inference(message);
console.log(response);
```
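The tools array is empty above. When you do want the model to call tools, each entry is typically a function-style tool definition. The exact schema Codebolt accepts is not shown on this page, so treat the following as a sketch using the common OpenAI-style shape; the tool name and fields are illustrative:

```js
// Hypothetical tool definition — the field layout follows the common
// OpenAI-style function schema; verify against Codebolt's own docs.
const tools = [
  {
    type: "function",
    function: {
      name: "create_file", // hypothetical tool name
      description: "Create a file with the given contents.",
      parameters: {
        type: "object",
        properties: {
          path: { type: "string", description: "Target file path." },
          contents: { type: "string", description: "File contents." }
        },
        required: ["path", "contents"]
      }
    }
  }
];

const response = await codebolt.llm.inference({
  messages: [{ role: "user", content: "Create an empty README." }],
  tools,
  tool_choice: "auto" // let the model decide whether to call a tool
});
```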

Explanation

The codebolt.llm.inference function allows you to send an inference request to a Large Language Model (LLM) and retrieve the model's response. It takes two parameters:

message (object): The input message or prompt you want to send to the LLM for inference, including the messages array and any tool configuration.

llmrole (string): Specifies the role of the LLM to use for inference. The role determines which model variant processes the input and generates the response. This parameter is optional.
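For example, to route the request to a model configured for a specific role (the role name "developer" below is illustrative — use whatever roles your Codebolt setup defines):

```js
// "developer" is a hypothetical role name; substitute a role
// configured in your Codebolt environment.
const response = await codebolt.llm.inference(message, "developer");
console.log(response);
```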