Engines API
llm_ie.engines.InferenceEngine
This is an abstract class that provides the interface for LLM inference engines. Child classes that inherit from this class can be used in extractors. They must implement the chat() method.
Parameters:
config : LLMConfig
the LLM configuration. Must be a child class of LLMConfig.
Source code in package/llm-ie/src/llm_ie/engines.py
chat
abstractmethod
chat(
messages: List[Dict[str, str]],
verbose: bool = False,
stream: bool = False,
messages_logger: MessagesLogger = None,
) -> Union[
Dict[str, str], Generator[Dict[str, str], None, None]
]
This method inputs chat messages and outputs LLM generated text.
Parameters:
messages : List[Dict[str,str]]
a list of dict with role and content. role must be one of {"system", "user", "assistant"}
verbose : bool, Optional
if True, LLM generated text will be printed in terminal in real-time.
stream : bool, Optional
if True, returns a generator that yields the output in real-time.
messages_logger : MessagesLogger, Optional
the message logger that logs the chat messages.
Returns:
response : Union[Dict[str,str], Generator[Dict[str, str], None, None]]
a dict {"reasoning":
Source code in package/llm-ie/src/llm_ie/engines.py
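For orientation, below is a minimal sketch of a custom engine that satisfies this interface. The echo behavior and the "response" key are illustrative assumptions; only the "reasoning" field and the chat() signature above come from this documentation.

from typing import Dict, Generator, List, Union
from llm_ie.engines import InferenceEngine

class EchoInferenceEngine(InferenceEngine):
    """Toy engine that echoes the last user message instead of calling an LLM."""
    def chat(self, messages: List[Dict[str, str]], verbose: bool = False,
             stream: bool = False, messages_logger=None
             ) -> Union[Dict[str, str], Generator[Dict[str, str], None, None]]:
        # Find the most recent user turn and "generate" it back.
        last_user = next(m["content"] for m in reversed(messages) if m["role"] == "user")
        result = {"reasoning": "", "response": last_user}  # "response" key is an assumption
        if stream:
            def _gen():
                yield result
            return _gen()
        if verbose:
            print(result["response"])
        return result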
llm_ie.engines.OllamaInferenceEngine
OllamaInferenceEngine(
model_name: str,
num_ctx: int = 4096,
keep_alive: int = 300,
config: LLMConfig = None,
**kwrs
)
Bases: InferenceEngine
The Ollama inference engine.
Parameters:
model_name : str
the model name exactly as shown in ollama ls
num_ctx : int, Optional
context length that LLM will evaluate.
keep_alive : int, Optional
seconds to hold the LLM after the last API call.
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
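A hedged construction example; the model name is a placeholder and a local Ollama server is assumed to be running.

from llm_ie.engines import OllamaInferenceEngine

# model_name must match an entry from `ollama ls`; llama3.1:8b is only an example.
engine = OllamaInferenceEngine(model_name="llama3.1:8b", num_ctx=8192, keep_alive=300)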
chat
chat(
messages: List[Dict[str, str]],
verbose: bool = False,
stream: bool = False,
messages_logger: MessagesLogger = None,
) -> Union[
Dict[str, str], Generator[Dict[str, str], None, None]
]
This method inputs chat messages and outputs LLM generated text.
Parameters:
messages : List[Dict[str,str]]
a list of dict with role and content. role must be one of {"system", "user", "assistant"}
verbose : bool, Optional
if True, LLM generated text will be printed in terminal in real-time.
stream : bool, Optional
if True, returns a generator that yields the output in real-time.
messages_logger : MessagesLogger, Optional
the message logger that logs the chat messages.
Returns:
response : Union[Dict[str,str], Generator[Dict[str, str], None, None]]
a dict {"reasoning":
Source code in package/llm-ie/src/llm_ie/engines.py
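Illustrative usage of chat(), reusing the engine constructed above. The prompts are placeholders, and any keys of the returned dict beyond "reasoning" follow the library's own convention.

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "List three symptoms of influenza."},
]

# Blocking call: returns a single dict (includes a "reasoning" field).
result = engine.chat(messages, verbose=True)
print(result)

# Streaming call: returns a generator that yields dicts in real-time.
for chunk in engine.chat(messages, stream=True):
    print(chunk)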
chat_async
async
chat_async(
messages: List[Dict[str, str]],
messages_logger: MessagesLogger = None,
) -> Dict[str, str]
Async version of chat method. Streaming is not supported.
Source code in package/llm-ie/src/llm_ie/engines.py
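A sketch of concurrent use with asyncio; chat_async() returns a single dict and does not stream. The prompts are placeholders.

import asyncio

async def main():
    batches = [
        [{"role": "user", "content": "Summarize: patient denies chest pain."}],
        [{"role": "user", "content": "Summarize: patient reports a mild headache."}],
    ]
    # Run several requests concurrently against the same engine.
    results = await asyncio.gather(*(engine.chat_async(msgs) for msgs in batches))
    for result in results:
        print(result)

asyncio.run(main())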
llm_ie.engines.OpenAICompatibleInferenceEngine
OpenAICompatibleInferenceEngine(
model: str,
api_key: str,
base_url: str,
config: LLMConfig = None,
**kwrs
)
Bases: InferenceEngine
General OpenAI-compatible server inference engine. https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
For parameters and documentation, refer to https://platform.openai.com/docs/api-reference/introduction
Parameters:
model : str
the model name as served by the OpenAI-compatible server
api_key : str
the API key for the server.
base_url : str
the base url for the server.
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
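A hedged construction example for a self-hosted OpenAI-compatible server; the base URL, API key, and model name are placeholders.

from llm_ie.engines import OpenAICompatibleInferenceEngine

engine = OpenAICompatibleInferenceEngine(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model name
    api_key="EMPTY",                           # placeholder; many local servers ignore it
    base_url="http://localhost:8000/v1",
)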
chat
chat(
messages: List[Dict[str, str]],
verbose: bool = False,
stream: bool = False,
messages_logger: MessagesLogger = None,
) -> Union[
Dict[str, str], Generator[Dict[str, str], None, None]
]
This method inputs chat messages and outputs LLM generated text.
Parameters:
messages : List[Dict[str,str]]
a list of dict with role and content. role must be one of {"system", "user", "assistant"}
verbose : bool, Optional
if True, LLM generated text will be printed in terminal in real-time.
stream : bool, Optional
if True, returns a generator that yields the output in real-time.
messages_logger : MessagesLogger, Optional
the message logger that logs the chat messages.
Returns:
response : Union[Dict[str,str], Generator[Dict[str, str], None, None]]
a dict {"reasoning":
Source code in package/llm-ie/src/llm_ie/engines.py
chat_async
async
chat_async(
messages: List[Dict[str, str]],
messages_logger: MessagesLogger = None,
) -> Dict[str, str]
Async version of chat method. Streaming is not supported.
Source code in package/llm-ie/src/llm_ie/engines.py
llm_ie.engines.VLLMInferenceEngine
VLLMInferenceEngine(
model: str,
api_key: str = "",
base_url: str = "http://localhost:8000/v1",
config: LLMConfig = None,
**kwrs
)
Bases: OpenAICompatibleInferenceEngine
vLLM OpenAI compatible server inference engine. https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
For parameters and documentation, refer to https://platform.openai.com/docs/api-reference/introduction
Parameters:
model : str
the model name as shown in the vLLM server
api_key : str, Optional
the API key for the vLLM server.
base_url : str, Optional
the base url for the vLLM server.
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
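Construction sketch for a vLLM server running with the default base URL; the model name is a placeholder.

from llm_ie.engines import VLLMInferenceEngine

# base_url defaults to http://localhost:8000/v1 and api_key defaults to "".
engine = VLLMInferenceEngine(model="meta-llama/Llama-3.1-8B-Instruct")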
llm_ie.engines.SGLangInferenceEngine
SGLangInferenceEngine(
model: str,
api_key: str = "",
base_url: str = "http://localhost:30000/v1",
config: LLMConfig = None,
**kwrs
)
Bases: OpenAICompatibleInferenceEngine
SGLang OpenAI compatible API inference engine. https://docs.sglang.ai/basic_usage/openai_api.html
Parameters:
model : str
the model name as shown in the SGLang server
api_key : str, Optional
the API key for the SGLang server.
base_url : str, Optional
the base url for the SGLang server.
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
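Construction sketch for an SGLang server on its default port; the model name is a placeholder.

from llm_ie.engines import SGLangInferenceEngine

# base_url defaults to http://localhost:30000/v1.
engine = SGLangInferenceEngine(model="meta-llama/Llama-3.1-8B-Instruct")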
llm_ie.engines.OpenRouterInferenceEngine
OpenRouterInferenceEngine(
model: str,
api_key: str = None,
base_url: str = "https://openrouter.ai/api/v1",
config: LLMConfig = None,
**kwrs
)
Bases: OpenAICompatibleInferenceEngine
OpenRouter OpenAI-compatible server inference engine.
Parameters:
model : str
the model name as shown on OpenRouter
api_key : str, Optional
the API key for OpenRouter. If None, will use the key in os.environ['OPENROUTER_API_KEY'].
base_url : str, Optional
the base url for the OpenRouter server.
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
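Construction sketch; the model identifier is a placeholder.

from llm_ie.engines import OpenRouterInferenceEngine

# With api_key=None (default), the key is taken from os.environ['OPENROUTER_API_KEY'].
engine = OpenRouterInferenceEngine(model="meta-llama/llama-3.1-8b-instruct")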
llm_ie.engines.OpenAIInferenceEngine
Bases: InferenceEngine
The OpenAI API inference engine. For parameters and documentation, refer to https://platform.openai.com/docs/api-reference/introduction
Parameters:
model_name : str
model name as described in https://platform.openai.com/docs/models
Source code in package/llm-ie/src/llm_ie/engines.py
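A hedged construction sketch, assuming the constructor accepts the model_name parameter documented above and that the OpenAI credentials are configured in the environment; the model name is a placeholder.

from llm_ie.engines import OpenAIInferenceEngine

# Assumes the OpenAI API key is available in the environment (e.g., OPENAI_API_KEY).
engine = OpenAIInferenceEngine(model_name="gpt-4o-mini")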
chat
chat(
messages: List[Dict[str, str]],
verbose: bool = False,
stream: bool = False,
messages_logger: MessagesLogger = None,
) -> Union[
Dict[str, str], Generator[Dict[str, str], None, None]
]
This method inputs chat messages and outputs LLM generated text.
Parameters:
messages : List[Dict[str,str]]
a list of dict with role and content. role must be one of {"system", "user", "assistant"}
verbose : bool, Optional
if True, LLM generated text will be printed in terminal in real-time.
stream : bool, Optional
if True, returns a generator that yields the output in real-time.
messages_logger : MessagesLogger, Optional
the message logger that logs the chat messages.
Returns:
response : Union[Dict[str,str], Generator[Dict[str, str], None, None]]
a dict {"reasoning":
Source code in package/llm-ie/src/llm_ie/engines.py
chat_async
async
chat_async(
messages: List[Dict[str, str]],
messages_logger: MessagesLogger = None,
) -> Dict[str, str]
Async version of chat method. Streaming is not supported.
Source code in package/llm-ie/src/llm_ie/engines.py
llm_ie.engines.AzureOpenAIInferenceEngine
Bases: OpenAIInferenceEngine
The Azure OpenAI API inference engine. For parameters and documentation, refer to:
https://azure.microsoft.com/en-us/products/ai-services/openai-service
https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart
Parameters:
model : str
model name as described in https://platform.openai.com/docs/models
api_version : str
the Azure OpenAI API version
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
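A hedged construction sketch, assuming the constructor accepts the model and api_version parameters documented above and that the Azure endpoint and key are configured in the environment; all values are placeholders.

from llm_ie.engines import AzureOpenAIInferenceEngine

# Assumes Azure credentials/endpoint are provided via environment variables
# (e.g., AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT).
engine = AzureOpenAIInferenceEngine(model="gpt-4o-mini", api_version="2024-02-01")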
llm_ie.engines.HuggingFaceHubInferenceEngine
HuggingFaceHubInferenceEngine(
model: str = None,
token: Union[str, bool] = None,
base_url: str = None,
api_key: str = None,
config: LLMConfig = None,
**kwrs
)
Bases: InferenceEngine
The Huggingface_hub InferenceClient inference engine. For parameters and documentation, refer to https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_client
Parameters:
model : str
the model name exactly as shown in Huggingface repo
token : str, Optional
the Huggingface token. If None, will use the token in os.environ['HF_TOKEN'].
base_url : str, Optional
the base url for the LLM server. If None, will use the default Huggingface Hub URL.
api_key : str, Optional
the API key for the LLM server.
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
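Construction sketch for the Hugging Face Hub; the model ID is a placeholder.

from llm_ie.engines import HuggingFaceHubInferenceEngine

# With token=None (default), the token in os.environ['HF_TOKEN'] is used.
engine = HuggingFaceHubInferenceEngine(model="meta-llama/Llama-3.1-8B-Instruct")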
chat
chat(
messages: List[Dict[str, str]],
verbose: bool = False,
stream: bool = False,
messages_logger: MessagesLogger = None,
) -> Union[
Dict[str, str], Generator[Dict[str, str], None, None]
]
This method inputs chat messages and outputs LLM generated text.
Parameters:
messages : List[Dict[str,str]]
a list of dict with role and content. role must be one of {"system", "user", "assistant"}
verbose : bool, Optional
if True, LLM generated text will be printed in terminal in real-time.
stream : bool, Optional
if True, returns a generator that yields the output in real-time.
messages_logger : MessagesLogger, Optional
the message logger that logs the chat messages.
Returns:
response : Union[Dict[str,str], Generator[Dict[str, str], None, None]]
a dict {"reasoning":
Source code in package/llm-ie/src/llm_ie/engines.py
chat_async
async
chat_async(
messages: List[Dict[str, str]],
messages_logger: MessagesLogger = None,
) -> Dict[str, str]
Async version of chat method. Streaming is not supported.
Source code in package/llm-ie/src/llm_ie/engines.py
llm_ie.engines.LiteLLMInferenceEngine
LiteLLMInferenceEngine(
model: str = None,
base_url: str = None,
api_key: str = None,
config: LLMConfig = None,
)
Bases: InferenceEngine
The LiteLLM inference engine. For parameters and documentation, refer to https://github.com/BerriAI/litellm?tab=readme-ov-file
Parameters:
model : str
the model name
base_url : str, Optional
the base url for the LLM server
api_key : str, Optional
the API key for the LLM server
config : LLMConfig
the LLM configuration.
Source code in package/llm-ie/src/llm_ie/engines.py
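Construction sketch; the provider-prefixed model string is illustrative, and credentials are assumed to be configured as LiteLLM expects for the chosen provider.

from llm_ie.engines import LiteLLMInferenceEngine

# LiteLLM model strings are typically prefixed with the provider, e.g. "openai/gpt-4o-mini".
engine = LiteLLMInferenceEngine(model="openai/gpt-4o-mini")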
chat
chat(
messages: List[Dict[str, str]],
verbose: bool = False,
stream: bool = False,
messages_logger: MessagesLogger = None,
) -> Union[
Dict[str, str], Generator[Dict[str, str], None, None]
]
This method inputs chat messages and outputs LLM generated text.
Parameters:
messages : List[Dict[str,str]]
a list of dict with role and content. role must be one of {"system", "user", "assistant"}
verbose : bool, Optional
if True, LLM generated text will be printed in terminal in real-time.
stream : bool, Optional
if True, returns a generator that yields the output in real-time.
messages_logger : MessagesLogger, Optional
the message logger that logs the chat messages.
Returns:
response : Union[Dict[str,str], Generator[Dict[str, str], None, None]]
a dict {"reasoning":
Source code in package/llm-ie/src/llm_ie/engines.py
chat_async
async
chat_async(
messages: List[Dict[str, str]],
messages_logger: MessagesLogger = None,
) -> Dict[str, str]
Async version of chat method. Streaming is not supported.