Dynamic prompting docs #548

Open · wants to merge 1 commit into base: canary
36 changes: 36 additions & 0 deletions docs/mint.json
@@ -41,6 +41,10 @@
"name": "Views",
"url": "views"
},
{
"name": "Prompts",
"url": "prompts"
},
{
"name": "React",
"url": "react"
@@ -150,6 +154,38 @@
"queries/advanced/materialized-queries"
]
},
{
"group": "Basics",
"pages": [
"prompts/basics/introduction",
"prompts/basics/model-configuration",
"prompts/basics/api-reference"
]
},
{
"group": "Logic",
"pages": [
"prompts/logic/variables",
"prompts/logic/parameters",
"prompts/logic/if-else",
"prompts/logic/loops"
]
},
{
"group": "Advanced",
"pages": [
"prompts/advanced/config",
"prompts/advanced/run-query",
"prompts/advanced/reference-prompts",
"prompts/advanced/cast-methods"
]
},
{
"group": "Models",
"pages": [
"prompts/models/openai"
]
},
{
"group": "Basics",
"pages": [
49 changes: 49 additions & 0 deletions docs/prompts/advanced/cast-methods.mdx
@@ -0,0 +1,49 @@
---
title: 'Cast Methods'
description: 'Converts data types in Latitude logic blocks, ensuring that operations are performed with the correct data type.'
---

## Introduction

The `cast` method converts a value to a different type inside your prompt logic blocks (`{ }`). It ensures that your variables have the data types required by the operations you want to perform.

## Example

Consider a scenario where you have a parameter named `limit`, but the user has provided a value as a string, `"3"`, instead of a numeric value. To correctly perform a comparison operation, you need the value to be of a numeric type.

Without casting, the comparison in the following block would not work as expected:

```
{#if limit > 5}
...
{:else}
...
{/if}
```


This is because, with `limit = "3"`, you're comparing the string `"3"` to the number `5`, which leads to an incorrect comparison.

### Solution: Using `cast`

To resolve this, you can use the `cast` function to convert `limit` to an integer:

```
{#if cast(limit, "int") > 5}
...
{:else}
...
{/if}
```

This way, if `limit` is `"3"`, it gets converted to the numeric value `3` before the comparison, ensuring the operation is correctly performed.

## Accepted Casting Types

You can cast values to various types using the `cast` method. The following are the accepted types for casting:

- `string` or `text`: Converts the value to a string. Both `string` and `text` perform the same function.
- `int`: Converts the value to an integer.
- `float` or `number`: Converts the value to a floating-point number. Both `float` and `number` are treated similarly.
- `bool` or `boolean`: Converts the value to a boolean. Both `bool` and `boolean` are interchangeable and perform the same function.
- `date`: Converts the value to a date.
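
For instance, here is a minimal sketch that casts a hypothetical `active` parameter to a boolean before branching on it (the parameter name is illustrative):

```
{#if cast(active, "bool")}
  Summarize only the active users.
{:else}
  Summarize all users.
{/if}
```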
30 changes: 30 additions & 0 deletions docs/prompts/advanced/config.mdx
@@ -0,0 +1,30 @@
---
title: 'Prompt configuration'
description: 'Configure how your Language Model generates each prompt'
---
import PromptConfigDocs from '/snippets/prompts/config.mdx'

In addition to your model's default behaviour, you can configure how your Language Model handles each specific prompt. This can be done by adding a `config` tag to the prompt itself.
This keyword will not be included in the prompt that gets sent to your Language Model, but it defines how Latitude will generate the response.

## Syntax

A `config` tag is defined as a key-value pair in the prompt. Here's an example:
```
{@config model = 'gpt-3.5-turbo'} /* Sets the model to use */
{@config json = true} /* Sets the response to be a JSON object */

Create a JSON object with a list of users and their corresponding emails.
```

<Note>
Any `@config` value must be defined using a literal value, without any variables or expressions. Using variables or expressions will result in a syntax error.
</Note>
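
For example, a quick sketch of what this restriction means in practice (the `my_temp` variable is illustrative):

```
{@config temperature = 0.2}       /* valid: a literal value */
{@config temperature = my_temp}   /* invalid: variables or expressions raise a syntax error */
```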

## Default configuration

If you want to set a default configuration for all the prompts that use a model, you can do so by adding a `config` object to the model configuration file. Go to [Model configuration](/prompts/basics/model-configuration) to learn more about the configuration file.

## Available options

<PromptConfigDocs />
41 changes: 41 additions & 0 deletions docs/prompts/advanced/reference-prompts.mdx
@@ -0,0 +1,41 @@
---
title: 'Reference other prompts'
description: 'Learn how to include code from another prompt'
---

## Introduction

The Reference Prompts feature allows you to include code from another file in your current prompt. This is useful for reusing common parts of a prompt, ensuring that your instructions are consistent and easy to maintain.

## How It Works

To reference code from another prompt, use the syntax `{ref('other_prompt')}`, where `other_prompt` is the name of another `.prompt` file located in the `/prompts` folder. This syntax compiles the specified prompt and inserts its instructions at the location where the reference appears.

### Referencing prompts as a source

A common use case for referencing other prompts is sharing the same base instructions across multiple prompts. You can do this as follows:

```jsx prompts/data/data_engineer_description.prompt
You are a data engineer, an expert on data analysis and visualization.
```

```jsx prompts/data/analysis.prompt
{ref('./data_engineer_description')}

Describe the results of the following table:
{table(runQuery('data'))}
```

This compiles to:

```md prompts/data/analysis.prompt
You are a data engineer, an expert on data analysis and visualization.

Describe the results of the following table:

| id | name | age | city |
|----|------|-----|------|
| 1 | John | 25 | New York |
| 2 | Jane | 30 | Los Angeles |
| 3 | Bob | 35 | Chicago |
```
70 changes: 70 additions & 0 deletions docs/prompts/advanced/run-query.mdx
@@ -0,0 +1,70 @@
---
title: 'Run Query'
description: 'Analyze the results of a query and generate instructions based on the results.'
---

## Introduction

The `runQuery` function allows you to dynamically create instructions based on the results from a query. This functionality is particularly useful when you need to analyze data, or generate different instructions based on a value from your database.

## Syntax

The `runQuery` function takes a string as its first argument: the path to the query file.

Additionally, you can pass an object as a second argument to specify any parameters required by the query.

```jsx
{ user_id = param('user_id') }
{ results = runQuery('user_actions', { user_id: user_id }) }
```

The returned value will always be an array of objects, where each object represents a row in the query result, and the keys of the object represent the column names.

```json
[
{
"id": 302,
"user_id": 1,
"action": "signup",
"date": "2023-01-01"
},
{
"id": 527,
"user_id": 1,
"action": "purchase",
"date": "2023-01-02"
},
{
"id": 1091,
"user_id": 2,
"action": "purchase",
"date": "2023-01-03"
}
]
```

## Interpolating to the prompt

Depending on your needs, you may want to render the results in the compiled prompt in a specific format. The simplest option is to use the `table`, `json`, or `csv` functions, which format the results automatically (see the sketch after the example below).

However, if you want to print the results in a different format, you will need to iterate over them manually. Read more about [Loops](/prompts/logic/loops) to learn how to do this.

<CodeGroup>
```jsx Raw prompt
{user_id = param('user_id')}
{results = runQuery('user_actions', { user_id: user_id })}

Based on the following actions from user {user_id}, generate a report:

{#each results as row}
- {row.date}: {row.action}
{/each}
```
```md Compiled prompt
Based on the following actions from user 1, generate a report:

- 2023-01-01: signup
- 2023-01-02: purchase
- 2023-01-03: purchase
```
</CodeGroup>
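
For the formatted approach, here is a minimal sketch using the `table` helper, assuming it accepts a variable that holds the query results (the query name and parameter are taken from the example above):

```jsx
{user_id = param('user_id')}
{results = runQuery('user_actions', { user_id: user_id })}

Summarize the following actions from user {user_id}:

{table(results)}
```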
34 changes: 34 additions & 0 deletions docs/prompts/basics/api-reference.mdx
@@ -0,0 +1,34 @@
---
title: 'API Reference'
description: 'How your prompt endpoints are exposed'
---

## Introduction

For each of your prompts, Latitude automatically generates a REST API endpoint at `/api/prompt/<prompt-name>`.

Accessing this endpoint returns the response of your prompt as plain text.

## Parameters

In your prompt, you can use dynamic parameters, as explained in the [Parameters section](/prompts/logic/parameters). To pass these parameters to your API, simply add them as query parameters in the URL.

For example, if you have a prompt named `joke` with the following content:

```prompt
Tell me a joke about { param('topic') }
```

You can then call this prompt at `/api/prompt/joke?topic=cats` and it will return a joke about cats.
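
For instance, a minimal sketch with `curl`, assuming your Latitude app is running locally on port 3000 (adjust the host and port to your deployment):

```bash
# Call the joke prompt with topic=cats; the response body is plain text
curl "http://localhost:3000/api/prompt/joke?topic=cats"
```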

## Streaming

If you want to stream the response of your prompt instead of waiting for it to be generated, add the internal `__stream` parameter to your URL and set it to `true`. This will make Latitude return an [SSE stream](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) instead of a plain text response.

With the same `joke` prompt, calling `/api/prompt/joke?topic=cats&__stream=true` returns an SSE stream of the response.
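
A sketch of consuming the stream with `curl`, under the same local-host assumption; the `-N` flag disables output buffering so events print as they arrive:

```bash
# Stream the joke prompt as server-sent events
curl -N "http://localhost:3000/api/prompt/joke?topic=cats&__stream=true"
```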
71 changes: 71 additions & 0 deletions docs/prompts/basics/introduction.mdx
@@ -0,0 +1,71 @@
---
title: 'Introduction'
description: 'Learn the basics on how to prompt your data'
---

## Introduction

Latitude Prompts is a powerful feature that lets you easily generate AI insights from your data. It is versatile and can be used for a variety of purposes, such as data analysis, data visualization, and data augmentation.

Like Latitude Queries, Prompts are automatically exposed as endpoints on your Latitude API! This means you can use them from your frontend, your backend, or even your database. Also, thanks to our dynamic engine, you can easily integrate your data and even user inputs into your prompts.

## Setup

To use Latitude Prompts, just follow these steps:

<Steps>
<Step title="Create a Latitude project">
[Use the CLI](/guides/examples/basic-example#1-create-a-new-data-app) to create a new project.
</Step>
<Step title="Create a prompts folder">
Add a folder called `prompts` to your project's root. This is where you will store your prompts.
</Step>
<Step title="Configure a model">
Create a `.yaml` file with your Model configuration. This file will let Latitude know how to connect to your Language Model and how to generate responses.

Read [Model Configuration](/prompts/basics/model-configuration) to learn more about how to configure your model.
</Step>
<Step title="Write your prompt">
Inside the `prompts` folder, create a `.prompt` file with your prompt.
</Step>
</Steps>

## Prompt syntax

Regular prompts are written in plain text. Just write your prompt as you would write it in a regular text editor, and your Language Model will generate a response based on the prompt.

However, you can also use special syntax to make your prompts more powerful and dynamic. This syntax allows you to reference data from your database, pass parameters to your Language Model, and more.

Read more about the prompt syntax in the [Logic section](/prompts/logic/variables).
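
For example, here is a minimal sketch of a dynamic prompt that combines plain text with a parameter, a cast, and a conditional (the `limit` parameter is illustrative):

```
{limit = param('limit')}

List the {limit} most recent signups.

{#if cast(limit, "int") > 10}
  Group them by signup date.
{:else}
  Show each one on its own line.
{/if}
```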

## Run your prompt

Once you have configured your model and prompt, you can run your prompt by using the CLI or the API.

### CLI

To run your prompt using the CLI, use the `prompt` command:

```bash
latitude prompt <prompt-name>
```

This will run your prompt and print the response to the console as it is generated. The prompt name is defined by the relative path of the `.prompt` file from the `prompts` folder.

To add parameters from the CLI, use the `--param` flag. You can add it multiple times to pass multiple parameters.

```bash
latitude prompt table_insights --param limit=10 --param user_name="John Doe"
```

For debugging purposes, you can use the `--debug` flag to print the final prompt that will be sent to your Language Model. This is useful for understanding how your prompt is being compiled.

```bash
latitude prompt table_insights --debug
```

### API

You can also run your prompt using the API. To do this, you can use the `/api/prompt/<prompt-name>` endpoint.

Read more about the API in the [API Reference](/prompts/basics/api-reference) section.
42 changes: 42 additions & 0 deletions docs/prompts/basics/model-configuration.mdx
@@ -0,0 +1,42 @@
---
title: 'Model Configuration'
description: 'Learn how to configure your language model'
---
import PromptConfigDocs from '/snippets/prompts/config.mdx'

## Introduction

Language Models are the engines that generate responses from your prompts. You can configure your Language Model by creating a `.yaml` file in your project's `prompts` folder.

## Structure

You can configure multiple Language Models in your project, each with its own configuration. To do this, create multiple `.yaml` files in different subfolders; each prompt will automatically use the configuration file closest to it (see the example layout after the warning below).

<Warning>
You cannot add two model configuration files to the same folder.
</Warning>
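
For example, a sketch of a possible layout (file and folder names are illustrative); prompts under `prompts/marketing/` use the configuration in that subfolder, while the rest fall back to the one at the root of `prompts/`:

```
prompts/
├── model.yaml          # default model configuration
├── summary.prompt      # uses prompts/model.yaml
└── marketing/
    ├── model.yaml      # overrides the default for this subfolder
    └── campaign.prompt # uses prompts/marketing/model.yaml
```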

## Configuration

The configuration file is a YAML file that contains the following fields:

- `type`: The type of your Language Model. Check out the [Models Section](/prompts/models/openai) to learn more about the available models.
- `details`: Configuration details for your Language Model. This field is specific to each type of Language Model.
- `config`: A configuration object that defines how your Language Models will generate responses. Although all model types have the same properties, each type will have different default values.

### Config options

The `config` field can contain any of the following properties:

<PromptConfigDocs />

#### Example:

```yaml
type: openai
config:
model: gpt-3.5-turbo
temperature: 0.2 # Lower temperature values are better if consistency is important
json: true
```
