xinference.client.handlers.ChatModelHandle.chat

ChatModelHandle.chat(messages: List[Dict], tools: List[Dict] | None = None, generate_config: LlamaCppGenerateConfig | PytorchGenerateConfig | None = None) -> ChatCompletion | Iterator[ChatCompletionChunk]

Given a list of messages comprising a conversation, the model will return a response via RESTful APIs.

Parameters:
  • messages (List[Dict]) – A list of messages comprising the conversation so far.

  • tools (Optional[List[Dict]]) – An optional list of tool definitions the model may call (see the sketch after this list).

  • generate_config (Optional[Union["LlamaCppGenerateConfig", "PytorchGenerateConfig"]]) – Additional configuration for chat generation. Use LlamaCppGenerateConfig for llama-cpp-python models and PytorchGenerateConfig for PyTorch models.
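
Both messages and tools are plain dictionaries following the OpenAI chat format. A hedged sketch of the expected shapes; the get_weather tool below is hypothetical:

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the weather in Paris today?"},
    ]

    # OpenAI-style function tool definition (illustrative; adapt to your own tools).
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ]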

Returns:

stream is a key in generate_config. When stream is set to True, the function returns Iterator["ChatCompletionChunk"]; when stream is False (the default), it returns "ChatCompletion".

Return type:

Union["ChatCompletion", Iterator["ChatCompletionChunk"]]

Raises:

RuntimeError – Raised when the server fails to generate the chat completion; detailed information is provided in the error message.
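
Example:

A minimal usage sketch, assuming a running Xinference server and an already-launched chat model; the endpoint URL and the model UID "my-chat-model" are placeholders for your deployment:

    from xinference.client import Client

    client = Client("http://127.0.0.1:9997")   # assumed local endpoint
    model = client.get_model("my-chat-model")  # hypothetical model UID

    messages = [{"role": "user", "content": "What is the largest animal?"}]

    # stream defaults to False: a single ChatCompletion is returned.
    completion = model.chat(messages)
    print(completion["choices"][0]["message"]["content"])

    # With stream=True, an Iterator[ChatCompletionChunk] is returned instead.
    for chunk in model.chat(messages, generate_config={"stream": True}):
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)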