Prompt Editor API
llm_ie.prompt_editor.PromptEditor
PromptEditor(
inference_engine: InferenceEngine,
extractor: FrameExtractor,
prompt_guide: str = None,
)
This class is an LLM agent that rewrites or comments on a prompt draft based on the prompt guide of an extractor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `inference_engine` | `InferenceEngine` | The LLM inference engine object. Must implement the `chat()` method. | *required* |
| `extractor` | `FrameExtractor` | A `FrameExtractor`. | *required* |
| `prompt_guide` | `str` | The prompt guide for the extractor. All built-in extractors ship with a prompt guide in the asset folder; passing a value here overrides the built-in guide, which is not recommended. For custom extractors, this parameter must be provided. | `None` |
Source code in package/llm-ie/src/llm_ie/prompt_editor.py
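The only contract the class places on `inference_engine` is a working `chat()` method. A minimal sketch of a custom engine satisfying that contract is shown below; `EchoEngine` is a hypothetical stand-in, not part of `llm-ie`, and a real engine would call an LLM backend instead of echoing.

```python
class EchoEngine:
    """Hypothetical stand-in for an InferenceEngine.

    PromptEditor only requires that the engine implement chat(); a real
    engine would send the messages to an LLM and return its reply.
    """

    def chat(self, messages, **kwargs):
        # Echo the content of the last user turn instead of calling an LLM.
        return f"(echo) {messages[-1]['content']}"


engine = EchoEngine()
print(engine.chat([{"role": "user", "content": "draft prompt"}]))
# (echo) draft prompt
```

An instance like this (or any built-in engine) can then be passed as the `inference_engine` argument.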
rewrite
This method takes a prompt draft and rewrites it following the extractor's prompt guide.
comment
This method takes a prompt draft and comments on it following the extractor's prompt guide.
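The two methods differ only in what they return: `rewrite` produces an improved draft, while `comment` produces feedback on it. The sketch below shows the calling pattern with a hypothetical `PromptEditorStub`; the real `PromptEditor` delegates both calls to the inference engine with the extractor's prompt guide as context, so actual outputs depend on the LLM.

```python
class PromptEditorStub:
    """Hypothetical stand-in illustrating the rewrite/comment interfaces."""

    def rewrite(self, draft):
        # The real method returns an LLM-rewritten draft; here we just
        # append a section the prompt guide typically asks for.
        return draft.strip() + "\n\n# Output format\nReturn a JSON list of frames."

    def comment(self, draft):
        # The real method returns LLM-generated feedback on the draft.
        return "Consider specifying the output format explicitly."


editor = PromptEditorStub()
draft = "Extract all medication names from the note. "
improved = editor.rewrite(draft)
feedback = editor.comment(draft)
```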
chat
Public method that detects the runtime environment and dispatches to the appropriate chat implementation.
chat_stream
This method processes messages and yields response chunks from the inference engine. It is intended for the frontend app.
Parameters:
messages (List[Dict[str, str]]): list of message dictionaries (e.g., [{"role": "user", "content": "Hi"}]).
Yields:
Chunks of the assistant's response.
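The sketch below shows the expected `messages` format and how a caller might consume the yielded chunks. `fake_chat_stream` is a hypothetical stand-in generator; the real `chat_stream` streams chunks produced by the inference engine.

```python
# Messages follow the role/content dictionary format documented above.
messages = [
    {"role": "user", "content": "Please comment on my prompt draft."},
]


def fake_chat_stream(messages):
    """Stand-in generator mimicking chat_stream's chunk-by-chunk yields."""
    for chunk in ["Looks ", "good ", "overall."]:
        yield chunk


# Callers typically accumulate chunks as they arrive.
reply = "".join(fake_chat_stream(messages))
print(reply)  # Looks good overall.
```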