# Interface: LLMInferenceParams

Defined in: common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:87

LLM inference request parameters.

## Properties
| Property | Type | Description | Defined in |
|---|---|---|---|
| `full?` | `boolean` | Whether to return the full response | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:95 |
| `llmrole?` | `string` | The LLM role, used to determine which model to use | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:97 |
| `max_tokens?` | `number` | Maximum number of tokens to generate | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:99 |
| `messages` | `MessageObject[]` | Array of messages in the conversation | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:89 |
| `stream?` | `boolean` | Whether to stream the response | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:103 |
| `temperature?` | `number` | Temperature for response generation | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:101 |
| `tool_choice?` | `ToolChoice` | How the model should use tools | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:93 |
| `tools?` | `Tool[]` | Available tools for the model to use | common/types/src/codeboltjstypes/libFunctionTypes/llm.ts:91 |
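A minimal sketch of building a request object with this shape. The `MessageObject` and `ToolChoice` declarations below are local placeholders written for illustration (the real definitions live in the library's type package and may differ); only the `LLMInferenceParams` field names and types come from the table above.

```typescript
// Placeholder shapes, redeclared locally so the example is self-contained.
// In real code, import these from the library's type package instead.
interface MessageObject {
  role: string;
  content: string;
}

type ToolChoice = unknown; // placeholder; the actual union is defined by the library

interface LLMInferenceParams {
  messages: MessageObject[];        // required: the conversation so far
  tools?: unknown[];                // placeholder for the library's Tool type
  tool_choice?: ToolChoice;
  full?: boolean;
  llmrole?: string;
  max_tokens?: number;
  temperature?: number;
  stream?: boolean;
}

// Example request: only `messages` is required; the rest are optional tuning knobs.
const params: LLMInferenceParams = {
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this file." },
  ],
  llmrole: "default",   // selects which configured model serves the request
  max_tokens: 512,
  temperature: 0.2,
  stream: false,
};

console.log(params.messages.length);
```

Because every field except `messages` is optional, a caller can start with just the conversation array and layer on sampling or tool options as needed.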