LLM Explorer

Chat Sandbox

The Chat UI in LLM Explorer offers an experience similar to the OpenAI chat sandbox for writing zero-shot and multi-shot prompts for use in agents.

Generation Settings / System Message

  • System Message: Defines the system message sent to the model(s); it is formatted and prepended to the input/output examples.
  • Top K: Adjust the top K parameter (default is 50).
  • Top P: Adjust the top P parameter (default is 0.9).
  • Min P: Adjust the minimum P parameter (default is 0.05).
  • Temperature: Adjust the temperature setting (default is 1).
  • Mirostat: Turn the mirostat setting on or off (default is off).
  • Seed: Set the seed for generation. Default is -1 (random).
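
As a rough sketch of how these settings fit together, the defaults above could be collected into a single generation request. The field names below are assumptions for illustration, not LLM Explorer's actual API:

```python
# Hypothetical request payload showing the sandbox defaults.
# Field names are illustrative, not LLM Explorer's actual API.
generation_settings = {
    "top_k": 50,         # default 50: sample from the 50 most likely tokens
    "top_p": 0.9,        # default 0.9: nucleus sampling cutoff
    "min_p": 0.05,       # default 0.05: drop tokens far less likely than the top token
    "temperature": 1.0,  # default 1: higher values increase randomness
    "mirostat": False,   # default off: adaptive sampler that targets a perplexity level
    "seed": -1,          # default -1 (random): set >= 0 for reproducible output
}
```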

Features

(Annotated screenshot; the numbered callouts below refer to it.)

  • Multi-shot Examples (#1): The plus and minus buttons in the lower left-hand corner add and remove input/output example pairs to feed into the model(s); see the prompt-assembly sketch after this list.

  • Completion Tokens (#2): Maximum number of completion tokens to generate.

  • Number of Generations (#3): Number of generations to run per model.

  • Model Selection (#4): Multi-select list of models to test.

  • Copy to Clipboard (#5): Copies the output to the clipboard.
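
To make the prompt flow concrete, here is a minimal sketch of how the system message and multi-shot example pairs might be assembled into an OpenAI-style message list before being sent to each selected model. The function and field names are illustrative, not LLM Explorer's internals:

```python
def build_messages(system_message: str,
                   examples: list[tuple[str, str]],
                   user_input: str) -> list[dict]:
    """Prepend the system message, then add each input/output
    example pair ahead of the final user input."""
    messages = [{"role": "system", "content": system_message}]
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": user_input})
    return messages

# Example: one input/output pair plus the final prompt.
messages = build_messages(
    "You are a terse math assistant.",
    [("2 + 2", "4")],
    "3 + 5",
)
```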

Exclude Example / Copy to Input

When this checkbox is selected, the input/output example is not sent to the model. Combined with Copy to Input, this makes it possible to save validation examples for testing new models.
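
Continuing the illustrative sketch above, excluding an example amounts to filtering it out before the prompt is assembled, while keeping it available as held-out validation data:

```python
# Each example carries an "excluded" flag; names here are illustrative.
examples = [
    {"input": "2 + 2", "output": "4", "excluded": False},
    {"input": "3 * 3", "output": "9", "excluded": True},  # validation only
]

# Only non-excluded pairs go into the prompt ...
sent = [(e["input"], e["output"]) for e in examples if not e["excluded"]]
# ... while excluded pairs are held back for validating new models.
held_out = [(e["input"], e["output"]) for e in examples if e["excluded"]]
```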

Tokens per Second and Count Totals

Each model that produces a generation reports its generation speed along with its input and completion token counts.
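
A straightforward way to derive such a figure is to divide the completion token count by the wall-clock generation time. The sketch below assumes exactly that; the `generate` stand-in is hypothetical, not the project's code:

```python
import time

def generate(prompt: str) -> list[str]:
    """Stand-in for a real model call; returns generated 'tokens'."""
    return prompt.split()

start = time.perf_counter()
completion = generate("the quick brown fox jumps over the lazy dog")
elapsed = time.perf_counter() - start

# Guard against a zero-length interval from very fast stand-in calls.
tokens_per_second = len(completion) / max(elapsed, 1e-9)
print(f"{len(completion)} completion tokens at {tokens_per_second:.1f} tokens/second")
```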
