Large Language Models
The FANTASIA components that query Large Language Models (LLMs) share the same set of basic functions and data types, making it easy to switch between them.
This enum type, ChatGPTRoleTypeEnum, defines the set of roles that can be used in a GPT-based query. Values are:
- SYSTEM: indicates the system prompt;
- ASSISTANT: indicates the machine chat role;
- USER: indicates the user;
- FUNCTION: indicates a request for a call to a function.
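For illustration, the role set above can be mirrored as a small Python enum; the class name and string values below are assumptions for the sketch, not part of FANTASIA's API:

```python
from enum import Enum

class ChatGPTRole(Enum):
    # Illustrative mirror of ChatGPTRoleTypeEnum
    SYSTEM = "system"        # the system prompt
    ASSISTANT = "assistant"  # the machine chat role
    USER = "user"            # the user
    FUNCTION = "function"    # a request for a call to a function
```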
This structure defines a chat turn. Its fields are:
- Role: a ChatGPTRoleTypeEnum indicating the role for the described turn;
- Content: a string representing the turn content.
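A minimal Python sketch of such a chat-turn structure; the field names follow the description above, while the class name and the use of plain strings for roles are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ChatTurn:
    role: str     # the role for the described turn, e.g. "system", "user"
    content: str  # the turn content
```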
This function queries the LLM with the provided chat history and model.
Input:
- Messages: a list of chat turns preceding the current one;
- Model: a string representing the model to be queried.
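As an illustrative sketch, the two inputs can be assembled into a chat-completions style request body; the helper name and JSON layout below are assumptions, not FANTASIA's internals:

```python
def build_chat_request(messages, model):
    """Assemble a chat query body.

    messages: list of (role, content) pairs preceding the current turn
    model:    name of the model to be queried
    """
    return {
        "model": model,
        "messages": [{"role": r, "content": c} for r, c in messages],
    }
```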
This event fires when the LLM has provided its answer.
Data:
- ChatGPT Response: the answer provided by the LLM;
- Role: a string representing the role specified by the LLM to interpret its answer.
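The subscribe-and-fire pattern behind such an event can be sketched in Python; all names here are illustrative, not FANTASIA's:

```python
class ResponseEvent:
    """Toy event: handlers run when the LLM's answer arrives."""

    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        # handler receives (response, role)
        self._handlers.append(handler)

    def fire(self, response, role):
        # Deliver the LLM's answer and the role it reported
        for handler in self._handlers:
            handler(response, role)
```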
The Ollama component connects to an Ollama instance, letting you host your own LLM server running open-source models.
The configuration section of the component has the following fields:
- Endpoint: the address of the server running Ollama (e.g. "localhost")
- Port: the port Ollama is listening to (Default value: 11434)
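From these two fields, the server's chat URL can be assembled as follows; Ollama's REST API serves chat completions at /api/chat, listening on port 11434 by default (the helper name is illustrative):

```python
def ollama_chat_url(endpoint="localhost", port=11434):
    # Ollama exposes chat completions at /api/chat on the configured
    # endpoint and port (default 11434)
    return f"http://{endpoint}:{port}/api/chat"
```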
The Groq component provides access to the accelerated inference service provided by Groq for most of the open source models available.
The configuration section of the component has the following fields:
- Key: the secret key of your Groq subscription
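Groq exposes an OpenAI-compatible REST API in which the secret key is sent as a Bearer token; a minimal sketch of the request headers (the helper name is illustrative):

```python
def groq_headers(api_key):
    # The Groq secret key authenticates requests as a Bearer token
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```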
The OpenAI component provides access to the models offered by OpenAI. Check out the video for an example.
The configuration section of the component has the following fields:
- Key: the secret key of your OpenAI subscription
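A minimal sketch of how the key is used against OpenAI's public chat-completions endpoint; the helper name is illustrative, and the request is only built here, not sent:

```python
import json
import urllib.request

def openai_request(api_key, body):
    # Build (without sending) a chat-completions request; the secret
    # key goes in the Authorization header as a Bearer token
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```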