LLM inference - Sends an inference request to the LLM and returns the model's response. The model is selected based on the provided