LLMInferenceParams

@codebolt/types


Interface: LLMInferenceParams

Defined in: common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:87

LLM inference request parameters

Properties

| Property | Type | Description | Defined in |
| --- | --- | --- | --- |
| `full?` | `boolean` | Whether to return the full response | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:95 |
| `llmrole?` | `string` | The LLM role to determine which model to use | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:97 |
| `max_tokens?` | `number` | Maximum number of tokens to generate | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:99 |
| `messages` | `MessageObject[]` | Array of messages in the conversation | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:89 |
| `stream?` | `boolean` | Whether to stream the response | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:103 |
| `temperature?` | `number` | Temperature for response generation | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:101 |
| `tool_choice?` | `ToolChoice` | How the model should use tools | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:93 |
| `tools?` | `Tool[]` | Available tools for the model to use | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:91 |
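To illustrate how these properties fit together, here is a minimal sketch of building a request object. The local `MessageObject`, `Tool`, and `ToolChoice` definitions below are assumed stand-ins for the real types exported by `@codebolt/types`; only the `LLMInferenceParams` shape itself is taken from the table above.

```typescript
// Assumed stand-in shapes; the real MessageObject, Tool, and ToolChoice
// are defined in @codebolt/types and may differ.
interface MessageObject {
  role: string;
  content: string;
}

interface Tool {
  type: string;
  function: { name: string; description?: string };
}

type ToolChoice = "auto" | "none" | "required";

// Mirror of the LLMInferenceParams interface documented above.
// Only `messages` is required; every other property is optional.
interface LLMInferenceParams {
  messages: MessageObject[];  // conversation history
  tools?: Tool[];             // tools the model may call
  tool_choice?: ToolChoice;   // how the model should use tools
  full?: boolean;             // return the full response
  llmrole?: string;           // selects which model to use
  max_tokens?: number;        // cap on generated tokens
  temperature?: number;       // sampling temperature
  stream?: boolean;           // stream the response
}

// Example request object.
const params: LLMInferenceParams = {
  messages: [{ role: "user", content: "Summarize this file." }],
  max_tokens: 256,
  temperature: 0.2,
  stream: false,
};
```

Because all fields except `messages` are optional, a caller can start with just the conversation array and add tuning parameters as needed.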