New web quickstart (#62)
* Checking in

* Checking in
jordancde authored Apr 20, 2024
1 parent b16bfa6 commit 6a5c3e2
Showing 3 changed files with 47 additions and 176 deletions.
208 changes: 43 additions & 165 deletions quickstart/web.mdx
@@ -1,7 +1,7 @@
---
title: "Web"
sidebarTitle: "Web"
description: "Quickstart with Vapi on the Web."
description: "Get started with Vapi on the Web."
---

import InstallWebSDK from "/snippets/sdks/web/install-web-sdk.mdx";
@@ -16,62 +16,36 @@ import { YouTubeEmbed } from "/snippets/video/videos.mdx";
altTitle="Quickstart: Web"
/>

**Anywhere you can run clientside JavaScript, you can run Vapi.** All the way from file-by-file vanilla JavaScript, to writing complex component-based applications (w/ React + Next.js).
Anywhere you can run client-side JavaScript, you can run Vapi, from vanilla scripts to complex component-based applications built with React and Next.js.

<AccordionGroup>
<NoCodePrerequisitesAccordion />
<Accordion title="Follow-Along Live Demo" icon="browsers" iconType="regular">
<CardGroup cols={1}>
<Card
title="Skim Our Live React Demo (as you read)"
icon="arrow-up-right-from-square"
iconType="regular"
color="#f25130"
href={quickstartDemoLink}
>
As you follow this guide, you can interact with a live demo to deepen your understanding.
</Card>
</CardGroup>
</Accordion>
</AccordionGroup>

## Vapi’s Pizzeria

We will be implementing a simple order-taking assistant at a pizza shop called “Vapi’s Pizzeria”.

Vapi’s has 3 types of menu items: `pizza`, `side`s, & `drink`s. Customers will be ordering 1 of each.

<Frame caption="Customers will order 3 items: 1 pizza, 1 side, & 1 drink. The assistant will handle the full order taking conversation.">
<img src="/static/images/quickstart/vapis-pizzeria.png" />
</Frame>

### TODOs

There are 2 ways we can do this:

1. **Inline call:** we can start a web call in 1 function call w/ the Web SDK (passing [assistant](/assistants) configuration inline)
2. **Create an assistant, then call:** we can create a [persistent assistant](/persistent_assistants), then start a web call (with the assistant's ID)
<CardGroup cols={2}>
<Card title="The Web SDK" icon="window" iconType="duotone" href="/sdk/web">
Explore the full Vapi Web SDK.
</Card>
<Card
title="Live React Demo"
icon="arrow-up-right-from-square"
iconType="regular"
color="#f25130"
href={quickstartDemoLink}
>
Follow along as you read.
</Card>
</CardGroup>

In both cases, we will be using the [Web SDK](/sdk/web) to start a call.
## Installation

## Starting a Web Call Inline (fastest)
<InstallWebSDK />

First, we will look at starting a call inline, passing the assistant configuration directly to the Web SDK's `.start()` method.
<ImportWebSDK />

### Setup
## Starting a Call

<AccordionGroup>
<Accordion title="Install the Web SDK" icon="folder-arrow-down" iconType="solid">
<InstallWebSDK />
</Accordion>
<Accordion title="Import the Vapi Module" icon="brackets-curly" iconType="solid">
<ImportWebSDK />
</Accordion>
</AccordionGroup>
Assistants can either be created on the fly (temporary) or created & persisted to your account (persistent).

### Making the Call
### Option 1: Temporary Assistant

Now that we have the SDK code imported, we just have to call `.start()`, passing in our assistant configuration to begin the call.
If you want to customize properties from the frontend on the fly, you can create an assistant configuration object and pass it to the `.start()` method.

<AccordionGroup>
<Accordion title="Assistant Configuration" icon="brackets-curly" iconType="solid">
@@ -137,137 +111,41 @@ Now that we have the SDK code imported, we just have to call `.start()`, passing
},
};
```
Let's break down the configuration options we passed:

</Accordion>
<Accordion title="Start the Call" icon="play" iconType="solid">
And here is how we start the call:

```javascript
vapi.start(assistantOptions);
```

<Accordion title="Allowing Microphone Access" icon="microphone" iconType="solid">
The SDK will request access to your system's microphone; make sure you grant it.
<Frame caption="Your browser may ask you for microphone permissions; make sure you grant them to the application.">
<img src="/static/images/quickstart/web/microphone-permissions.png" />
</Frame>
</Accordion>
</Accordion>
</AccordionGroup>
#### Breakdown
When we call `.start()` and pass an assistant configuration inline above, Vapi will do **2 things**:
1. **create an ephemeral assistant:** Vapi will conduct the call with an assistant as we configured (but [the assistant will not persist](/persistent_assistants) to your account)
2. **start a web call (w/ [Daily](https://www.daily.co))**: the SDK will start a realtime web call with bi-directional audio streaming
Let's look at each option we passed individually:

<AccordionGroup>
<Accordion title="name" icon="signature" iconType="solid">
This is a display name for the assistant in our dashboard (for internal purposes only). This is an optional field.

</Accordion>
<Accordion title="firstMessage" icon="message" iconType="light">
This is the first message that our assistant will say when it picks up the web call.
</Accordion>
<Accordion title="transcriber" icon="microphone" iconType="solid">
The **transcriber** is what turns user speech into processable text for our LLM. This is the first step in the end-to-end voice pipeline.

We are using [Deepgram](https://deepgram.com) for transcription, specifically, their `Nova 2` model. We also set the language to be transcribed as English.
- **name:** the display name for the assistant in our dashboard (for internal purposes only)
- **firstMessage:** the first message that our assistant will say when it picks up the web call
- **transcriber:** the transcriber turns user speech into processable text for our LLM. This is the first step in the end-to-end voice pipeline. We are using Deepgram for transcription, specifically their `Nova 2` model, with the transcription language set to English.
- **voice:** the final portion of the voice pipeline is turning LLM output text into speech, a process called "text-to-speech" (TTS for short). We use a voice provider called PlayHT and one of its voices, `jennifer`.
- **model:** for our LLM, we use `gpt-4` (from OpenAI) and set our system prompt for the assistant. The system prompt configures the context, role, personality, instructions, and so on. In our case, the system prompt above gives us the behaviour we want.
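
Assembled, the options object passed to `.start()` looks roughly like the sketch below. The values are illustrative placeholders, and the exact field names (e.g. `voiceId`) should be checked against the [create assistant](/api-reference/assistants/create-assistant) API:

```javascript
// A sketch of the full configuration; values are placeholders.
const assistantOptions = {
  name: "My Web Assistant", // display name, internal only
  firstMessage: "Hi, thanks for calling! How can I help you today?",
  transcriber: {
    provider: "deepgram",
    model: "nova-2",
    language: "en-US",
  },
  voice: {
    provider: "playht",
    voiceId: "jennifer",
  },
  model: {
    provider: "openai",
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: "You are a friendly, helpful voice assistant.",
      },
    ],
  },
};
```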

</Accordion>
<Accordion title="voice" icon="person" iconType="solid">
The final portion of the voice pipeline is turning LLM output-text into speech. This process is called "Text-to-speech" (or TTS for short).

We use a voice provider called [PlayHT](https://play.ht), & a voice provided by them called `jennifer`.

</Accordion>
<Accordion title="model" icon="microchip" iconType="solid">
For our LLM, we use `gpt-4` (from [OpenAI](https://openai.com/)) & set our system prompt for the assistant.

The system prompt configures the context, role, personality, instructions and so on for the assistant.

In our case, the system prompt above will give us the behaviour we want.

</Accordion>
</Accordion>
</AccordionGroup>

To discover additional fields you can pass at call creation, see our [create assistant](/api-reference/assistants/create-assistant) API.
Now we can call `.start()`, passing the temporary assistant configuration object:

<CardGroup cols={2}>
<Card
title="Try Our Live Demo"
icon="arrow-up-right-from-square"
iconType="regular"
color="#f25130"
href={quickstartDemoLink}
>
Try everything you've learned so far with real code in our live demo.
</Card>
</CardGroup>
## Creating An Assistant Separately
Now that we know how to start an assistant call inline, let's look at separating the process of assistant creation & calling into **2 steps**.
Let's look at how we can create the same assistant "by hand" in the Vapi dashboard, then call it separately.
```javascript
vapi.start(assistantOptions);
```
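
Once the call is live, the same `vapi` instance also emits lifecycle events you can subscribe to. A minimal sketch (these event names are assumed from the Web SDK; see [the Web SDK docs](/sdk/web) for the full list):

```javascript
vapi.on("call-start", () => console.log("Call has started."));
vapi.on("call-end", () => console.log("Call has ended."));

// Transcripts and other assistant output arrive as "message" events.
vapi.on("message", (message) => console.log(message));
```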

<Tip>
Remember that we can also programmatically [create an
assistant](/api-reference/assistants/create-assistant) via the API.
</Tip>
More configuration options can be found in the [Assistant](/api-reference/assistants/create-assistant) API reference.

### Assistant Creation
### Option 2: Persistent Assistant

We will perform the same configuration of our assistant's **transcriber**, **model**, & **voice**.
If you want to create an assistant that you can reuse across multiple calls, you can create a persistent assistant in the [Vapi Dashboard](https://dashboard.vapi.ai). Here's how you can do that:

<AssistantSetupInboundAccordionGroup />

### Calling Our Premade Assistant
Additional fields can be customized via the [Assistant](/api-reference/assistants/create-assistant) API instead.

Now that we have created our assistant, we just need to:
Then, you can copy the assistant's ID at the top of the assistant detail page:

1. **copy the assistant ID** from the dashboard
2. **start a web call** with the assistant ID
<Accordion title="Copy Your Assistant ID" icon="fingerprint" iconType="solid">
You can find your assistant's ID at the top of the assistant detail page:

<Frame caption="Your assistant's identifier will look something like this.">
<img src="/static/images/quickstart/assistant-id-dashboard.png" />
</Frame>
</Accordion>
<Frame>
<img src="/static/images/quickstart/assistant-id-dashboard.png" />
</Frame>

Now we can call `.start()`, passing the pre-made assistant's ID:
Now we can call `.start()`, passing the persistent assistant's ID:

```javascript
vapi.start("79f3XXXX-XXXX-XXXX-XXXX-XXXXXXXXce48");
```
This will have the same effect as what we did inline before; we just broke the process into two steps. Happy ordering!
<Tip>
Your assistant won't yet be able to hang up at the end of the call. We will learn more about
configuring call end behaviour in later guides.
</Tip>
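
You can, however, always end the call from the client side with the Web SDK's `stop()` method. A minimal sketch, assuming a hypothetical "Hang up" button with the id `hang-up` in your page:

```javascript
// Wire call termination to a button in your own UI.
document.querySelector("#hang-up").addEventListener("click", () => {
  vapi.stop();
});
```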

## Further Reading

<CardGroup cols={2}>
<Card title="The Web SDK" icon="window" iconType="duotone" href="/sdk/web">
Explore the full Vapi Web SDK.
</Card>
<Card
title="Live Demo"
icon="arrow-up-right-from-square"
iconType="regular"
color="#f25130"
href={quickstartDemoLink}
>
Try everything you've learned so far with real code in our live demo.
</Card>
</CardGroup>
9 changes: 3 additions & 6 deletions snippets/sdks/web/import-web-sdk.mdx
@@ -1,16 +1,13 @@
You can import the Vapi class from the package:
Import the package:

```javascript
import Vapi from "@vapi-ai/web";
```

Then, create a new instance of the Vapi class, passing your `Public Key` as a parameter to the constructor:
Then, create a new instance of the Vapi class, passing your **Public Key** as a parameter to the constructor:

```javascript
const vapi = new Vapi("your-public-key");
```

<Info>
You can find your public key in your Vapi dashboard at
[dashboard.vapi.ai/account](https://dashboard.vapi.ai/account)
</Info>
You can find your public key in the [Vapi Dashboard](https://dashboard.vapi.ai/account).
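
The public key is safe to expose in client-side code, but you may still prefer to load it from your build tool's environment rather than hardcoding it. A sketch assuming a Vite project (the `VITE_` prefix is Vite's convention for variables exposed to the browser):

```javascript
const vapi = new Vapi(import.meta.env.VITE_VAPI_PUBLIC_KEY);
```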
6 changes: 1 addition & 5 deletions snippets/sdks/web/install-web-sdk.mdx
@@ -1,11 +1,7 @@
You can install the package w/ [yarn](https://yarnpkg.com):
Install the package:

```bash
yarn add @vapi-ai/web
```

or w/ [npm](https://www.npmjs.com):

```bash
npm install @vapi-ai/web
```
