Prompt Editor API
llm_ie.prompt_editor.PromptEditor
```python
PromptEditor(
    inference_engine: InferenceEngine,
    extractor: FrameExtractor,
    prompt_guide: str = None,
)
```
This class is an LLM agent that rewrites or comments on a prompt draft based on the prompt guide of an extractor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `inference_engine` | `InferenceEngine` | The LLM inference engine object. Must implement the `chat()` method. | *required* |
| `extractor` | `FrameExtractor` | A `FrameExtractor` instance. | *required* |
| `prompt_guide` | `str` | The prompt guide for the extractor. All built-in extractors ship with a prompt guide in the asset folder. Passing a value here overrides the built-in prompt guide, which is not recommended. For custom extractors, this parameter must be provided. | `None` |
Source code in package/llm-ie/src/llm_ie/prompt_editor.py
rewrite
This method takes a prompt draft as input and rewrites it following the extractor's prompt guide. This method is stateless.
Source code in package/llm-ie/src/llm_ie/prompt_editor.py
comment
This method takes a prompt draft as input and comments on it following the extractor's prompt guide. This method is stateless.
Source code in package/llm-ie/src/llm_ie/prompt_editor.py
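The stateless `rewrite`/`comment` pattern can be sketched without the library. Everything below (`FakeEngine`, `SketchPromptEditor`, the prompt assembly) is a hypothetical, stdlib-only illustration of the pattern, not the actual llm-ie implementation:

```python
# Hypothetical, stdlib-only sketch of a stateless rewrite() agent.
# FakeEngine and SketchPromptEditor are illustrative stand-ins, NOT
# the actual llm-ie classes.

class FakeEngine:
    """Stands in for an InferenceEngine; must implement chat()."""
    def chat(self, messages):
        # A real engine would call an LLM; here we echo a canned reply.
        return f"[rewritten per guide] {messages[-1]['content']}"

class SketchPromptEditor:
    def __init__(self, inference_engine, prompt_guide):
        self.inference_engine = inference_engine
        self.prompt_guide = prompt_guide

    def rewrite(self, draft: str) -> str:
        # Stateless: a fresh message list is built on every call,
        # so nothing is remembered between invocations.
        messages = [
            {"role": "system", "content": self.prompt_guide},
            {"role": "user", "content": draft},
        ]
        return self.inference_engine.chat(messages)

editor = SketchPromptEditor(FakeEngine(), prompt_guide="Follow the schema.")
print(editor.rewrite("Extract all diagnoses."))
# -> [rewritten per guide] Extract all diagnoses.
```

A `comment` method would follow the same shape, differing only in the instruction sent alongside the guide.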
clear_messages
Clears the internal chat history (the `messages` list).
export_chat
Exports the current chat history to a JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `file_path` | `str` | Path to the file where the chat history will be saved. Should have a `.json` extension. | *required* |
Source code in package/llm-ie/src/llm_ie/prompt_editor.py
import_chat
Imports a chat history from a JSON file, overwriting the current history.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `file_path` | `str` | The path to the `.json` file containing the chat history. | *required* |
Source code in package/llm-ie/src/llm_ie/prompt_editor.py
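The export/import pair can be illustrated with a stdlib-only JSON round trip. The on-disk layout shown here (a list of `{"role", "content"}` dicts) is an assumption inferred from the message format documented under `chat_stream`, not a guaranteed llm-ie format:

```python
import json
import os
import tempfile

# Hypothetical round trip mirroring export_chat / import_chat.
# The exact JSON layout llm-ie writes may differ; a list of
# {"role", "content"} dicts is assumed here.
history = [
    {"role": "user", "content": "Rewrite my prompt draft."},
    {"role": "assistant", "content": "Here is a revised version..."},
]

path = os.path.join(tempfile.mkdtemp(), "chat_history.json")

# export: serialize the current history to a .json file
with open(path, "w", encoding="utf-8") as f:
    json.dump(history, f, indent=2)

# import: load the file, overwriting the current history
with open(path, "r", encoding="utf-8") as f:
    restored = json.load(f)

assert restored == history  # lossless round trip
```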
chat
External method that detects the environment and calls the appropriate chat method. This method uses and updates the `messages` list (internal memory). This method is stateful.
Source code in package/llm-ie/src/llm_ie/prompt_editor.py
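The stateful behavior can be sketched as follows; `StatefulChat` and its canned replies are hypothetical stand-ins for the real agent, shown only to contrast with the stateless methods above:

```python
# Hypothetical sketch of a stateful chat(): each call appends to an
# internal messages list, so later turns see earlier ones. Not the
# actual llm-ie implementation.

class StatefulChat:
    def __init__(self):
        self.messages = []  # internal memory, survives across calls

    def chat(self, user_input: str) -> str:
        self.messages.append({"role": "user", "content": user_input})
        reply = f"ack #{len(self.messages)}"  # stand-in for an LLM reply
        self.messages.append({"role": "assistant", "content": reply})
        return reply

session = StatefulChat()
session.chat("Draft: extract medications.")
session.chat("Make it stricter.")
print(len(session.messages))  # -> 4 (two user turns, two assistant turns)
```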
chat_stream
This method processes messages and yields response chunks from the inference engine. This is intended for frontend apps. This method is stateless.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `List[Dict[str, str]]` | List of message dictionaries (e.g., `[{"role": "user", "content": "Hi"}]`). | *required* |

Yields:
Chunks of the assistant's response.
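The generator contract can be sketched with a stdlib-only stand-in; the fixed-size chunking below is a hypothetical substitute for real token streaming from an inference engine:

```python
from typing import Dict, Iterator, List

# Hypothetical sketch of a stateless chat_stream(): the caller passes
# the full message list and receives response chunks. The chunking is
# a stand-in for token streaming from a real engine.

def chat_stream(messages: List[Dict[str, str]]) -> Iterator[str]:
    reply = f"You said: {messages[-1]['content']}"
    # Yield the reply a few characters at a time, like token streaming.
    for i in range(0, len(reply), 4):
        yield reply[i:i + 4]

chunks = list(chat_stream([{"role": "user", "content": "Hi"}]))
print("".join(chunks))  # -> You said: Hi
```

Because the method is stateless, a frontend must pass the complete message history on every call and accumulate the yielded chunks itself.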