LLM Configuration API
llm_ie.engines.BasicLLMConfig
Bases: LLMConfig
The basic LLM configuration for most non-reasoning models.
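Example (illustrative sketch only; the engine class, model name, and constructor arguments are assumptions for illustration and are not documented on this page):

```python
# A minimal sketch, assuming BasicLLMConfig accepts common generation kwargs
# and that inference engines take a `config` argument (both are assumptions,
# not verified against engines.py).
from llm_ie.engines import OpenAIInferenceEngine, BasicLLMConfig

config = BasicLLMConfig(temperature=0.0, max_new_tokens=1024)  # assumed parameters
engine = OpenAIInferenceEngine(model="gpt-4o-mini", config=config)  # assumed signature
```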
preprocess_messages
This method preprocesses the input messages before passing them to the LLM.
Parameters:
messages : List[Dict[str,str]] a list of dicts, each with "role" and "content" keys. "role" must be one of {"system", "user", "assistant"}.
Returns:
messages : List[Dict[str,str]] the preprocessed messages, a list of dicts with "role" and "content" keys. "role" must be one of {"system", "user", "assistant"}.
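Example (illustrative; for the basic configuration the messages are assumed to pass through without modification, which is an assumption based on the description above):

```python
from llm_ie.engines import BasicLLMConfig

# The expected message format: a list of dicts with "role" and "content",
# where "role" is one of "system", "user", or "assistant".
messages = [
    {"role": "system", "content": "You are an information extraction assistant."},
    {"role": "user", "content": "Extract all medication names from the note."},
]

config = BasicLLMConfig()
processed = config.preprocess_messages(messages)
# `processed` is expected to mirror `messages` for the basic configuration
# (an assumption, not verified against the source).
```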
postprocess_response
postprocess_response(
response: Union[str, Generator[str, None, None]],
) -> Union[str, Generator[Dict[str, str], None, None]]
This method postprocesses the LLM response after it is generated.
Parameters:
response : Union[str, Generator[str, None, None]] the LLM response. Can be a string or a generator.
Returns: Union[str, Generator[Dict[str, str], None, None]]
the postprocessed LLM response.
If the input is a generator, the output will be a generator that yields {"data": <content>} dictionaries.
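Example (a hedged consumption sketch; the chunk shape {"data": ...} is inferred from the return description above, and the call that produces the response is omitted):

```python
from typing import Dict, Generator, Union

def print_response(response: Union[str, Generator[Dict[str, str], None, None]]) -> None:
    # Non-streaming case: postprocess_response returns a plain string.
    if isinstance(response, str):
        print(response)
        return
    # Streaming case: each item is assumed to be a dict like {"data": "<chunk>"}.
    for chunk in response:
        print(chunk["data"], end="", flush=True)
```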
llm_ie.engines.OpenAIReasoningLLMConfig
Bases: LLMConfig
The OpenAI "o" series configuration:
1. The reasoning effort is set to "low" by default.
2. The temperature parameter is not supported and will be ignored.
3. The system prompt is not supported and will be concatenated to the next user prompt.
Parameters:
reasoning_effort : str, Optional the reasoning effort. Must be one of {"low", "medium", "high"}. Default is "low".
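Example (construction sketch; the engine class and model name are assumptions used only for illustration):

```python
from llm_ie.engines import OpenAIInferenceEngine, OpenAIReasoningLLMConfig

# reasoning_effort must be one of "low", "medium", "high"; default is "low".
config = OpenAIReasoningLLMConfig(reasoning_effort="medium")
engine = OpenAIInferenceEngine(model="o3-mini", config=config)  # assumed signature and model
```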
preprocess_messages
Concatenate system prompts to the next user prompt.
Parameters:
messages : List[Dict[str,str]] a list of dicts, each with "role" and "content" keys. "role" must be one of {"system", "user", "assistant"}.
Returns:
messages : List[Dict[str,str]] the preprocessed messages, a list of dicts with "role" and "content" keys. "role" must be one of {"system", "user", "assistant"}.
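Example (an illustrative before/after; the exact separator used when concatenating is an assumption):

```python
from llm_ie.engines import OpenAIReasoningLLMConfig

config = OpenAIReasoningLLMConfig()
messages = [
    {"role": "system", "content": "You are an information extraction assistant."},
    {"role": "user", "content": "Extract all diagnoses from the note."},
]
processed = config.preprocess_messages(messages)
# Expected shape (illustrative; the joining whitespace is an assumption):
# [{"role": "user",
#   "content": "You are an information extraction assistant.\nExtract all diagnoses from the note."}]
```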
postprocess_response
postprocess_response(
response: Union[str, Generator[str, None, None]],
) -> Union[str, Generator[Dict[str, str], None, None]]
This method postprocesses the LLM response after it is generated.
Parameters:
response : Union[str, Generator[str, None, None]] the LLM response. Can be a string or a generator.
Returns: Union[str, Generator[Dict[str, str], None, None]]
the postprocessed LLM response.
If the input is a generator, the output will be a generator that yields {"type": "response", "data": <content>} dictionaries.
llm_ie.engines.Qwen3LLMConfig
Bases: LLMConfig
The Qwen3 LLM configuration for reasoning models.
Parameters:
thinking_mode : bool, Optional if True, a special token "/think" will be placed after each system and user prompt. Otherwise, "/no_think" will be placed.
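Example (construction sketch, assuming `thinking_mode` is the only argument that needs to be set):

```python
from llm_ie.engines import Qwen3LLMConfig

thinking_config = Qwen3LLMConfig(thinking_mode=True)   # "/think" appended to system/user prompts
direct_config = Qwen3LLMConfig(thinking_mode=False)    # "/no_think" appended instead
```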
preprocess_messages
Append a special token to the system and user prompts. The token is "/think" if thinking_mode is True, otherwise "/no_think".
Parameters:
messages : List[Dict[str,str]] a list of dicts, each with "role" and "content" keys. "role" must be one of {"system", "user", "assistant"}.
Returns:
messages : List[Dict[str,str]] the preprocessed messages, a list of dicts with "role" and "content" keys. "role" must be one of {"system", "user", "assistant"}.
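Example (illustrative call; the exact placement and whitespace of the appended token are assumptions):

```python
from llm_ie.engines import Qwen3LLMConfig

config = Qwen3LLMConfig(thinking_mode=True)
messages = [{"role": "user", "content": "List the lab results in the note."}]
processed = config.preprocess_messages(messages)
# Expected content (illustrative; exact whitespace is an assumption):
# [{"role": "user", "content": "List the lab results in the note. /think"}]
```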
postprocess_response
postprocess_response(
response: Union[str, Generator[str, None, None]],
) -> Union[str, Generator[Dict[str, str], None, None]]
If the input is a generator, contents inside <think>...</think> tags are tagged as reasoning.
Parameters:
response : Union[str, Generator[str, None, None]] the LLM response. Can be a string or a generator.
Returns:
response : Union[str, Generator[Dict[str, str], None, None]]
the postprocessed LLM response.
If the input is a generator, the output will be a generator that yields {"type": <"reasoning" or "response">, "data": <content>} dictionaries.
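Example (a hedged end-to-end sketch; the chunk shape and the <think> handling are inferred from the descriptions above, and `fake_stream` stands in for a real streaming engine response):

```python
from llm_ie.engines import Qwen3LLMConfig

config = Qwen3LLMConfig(thinking_mode=True)

def fake_stream():
    # Stand-in for a streamed engine response (illustrative only).
    yield "<think>The note mentions two medications.</think>"
    yield "Metformin, Lisinopril"

reasoning_parts, answer_parts = [], []
for chunk in config.postprocess_response(fake_stream()):
    # Chunk shape {"type": ..., "data": ...} is an assumption based on the
    # return description above.
    if chunk.get("type") == "reasoning":
        reasoning_parts.append(chunk["data"])
    else:
        answer_parts.append(chunk["data"])

print("Reasoning:", "".join(reasoning_parts).strip())
print("Answer:", "".join(answer_parts).strip())
```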