From 47d030fe95e57acfcc60b3a3c740ea009f0cb6fa Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Mon, 16 Sep 2024 12:06:19 +0200 Subject: [PATCH 01/30] chore: add documentation for OpenAI client --- README.md | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 73 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index c2d2a66d..ef03e7ae 100644 --- a/README.md +++ b/README.md @@ -46,7 +46,8 @@ $ npm install @sap-ai-sdk/ai-api This package incorporates generative AI foundation models into your AI activities in SAP AI Core and SAP AI Launchpad. -To install the Gen AI Hub package in your project, run: +To install the Gen AI Hub package in your project, run: + ``` $ npm install @sap-ai-sdk/foundation-models ``` @@ -55,7 +56,8 @@ $ npm install @sap-ai-sdk/foundation-models This package incorporates generative AI orchestration capabilities into your AI activities in SAP AI Core and SAP AI Launchpad. -To install the Gen AI Hub package in your project, run: +To install the Gen AI Hub package in your project, run: + ``` $ npm install @sap-ai-sdk/orchestration ``` @@ -64,10 +66,78 @@ $ npm install @sap-ai-sdk/orchestration We have created a sample project demonstrating the different clients' usage of the SAP Cloud SDK for AI for TypeScript/JavaScript. The [project README](./sample-code/README.md) outlines the set-up needed to build and run it locally. +### OpenAI client + +The OpenAI client can be used to send chat completion or embedding requests to the OpenAI model deployed in SAP Generative AI Hub. + +#### Prerequisites + +- A deployed OpenAI model in AI Core. + - [How to deploy a model to AI Core](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-deployment-for-generative-ai-model-in-sap-ai-core) + -
An example deployed model from the AI Core /deployments endpoint +
+        {
+        "id": "d123456abcdefg",
+        "deploymentUrl": "https://api.ai.region.aws.ml.hana.ondemand.com/v2/inference/deployments/d123456abcdefg",
+        "configurationId": "12345-123-123-123-123456abcdefg",
+        "configurationName": "gpt-35-turbo",
+        "scenarioId": "foundation-models",
+        "status": "RUNNING",
+        "statusMessage": null,
+        "targetStatus": "RUNNING",
+        "lastOperation": "CREATE",
+        "latestRunningConfigurationId": "12345-123-123-123-123456abcdefg",
+        "ttl": null,
+        "details": {
+          "scaling": {
+            "backendDetails": null,
+            "backend_details": {
+            }
+          },
+          "resources": {
+            "backendDetails": null,
+            "backend_details": {
+              "model": {
+                "name": "gpt-35-turbo",
+                "version": "latest"
+              }
+            }
+          }
+        },
+        "createdAt": "2024-07-03T12:44:22Z",
+        "modifiedAt": "2024-07-16T12:44:19Z",
+        "submissionTime": "2024-07-03T12:44:51Z",
+        "startTime": "2024-07-03T12:45:56Z",
+        "completionTime": null
+      }
+      
+
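The payload above also shows where the model information lives in a deployment resource (`details.resources.backend_details.model`). As a rough, illustrative TypeScript sketch (the response shape is assumed to match the example above and is not an SDK API), resolving a deployment ID for a given model name could look like this:

```ts
// Illustrative only: pick the running deployment that serves a given model,
// based on the /deployments response shape shown in the example above.
interface DeploymentResource {
  id: string;
  status: string;
  details?: {
    resources?: {
      backend_details?: { model?: { name?: string; version?: string } };
    };
  };
}

function findDeploymentId(
  deployments: DeploymentResource[],
  modelName: string
): string | undefined {
  return deployments.find(
    deployment =>
      deployment.status === 'RUNNING' &&
      deployment.details?.resources?.backend_details?.model?.name === modelName
  )?.id;
}

// For the example payload above, this would return 'd123456abcdefg':
// findDeploymentId(deployments, 'gpt-35-turbo');
```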
+ +#### Install the required packages + +```bash +# install foundation models package +npm install @sap-ai-sdk/foundation-models +``` +#### Simple Chat completion + +```TS +const client = new OpenAiChatClient({ modelName: 'gpt-35-turbo' }); +const response = await client.run({ + messages: [ + { + role: 'user', + content: 'Where is the deepest place on earth located' + } + ] + }) +``` + +It is also possible to create a chat client by passing a `deploymentId` instead of the `modelName`. ## Support, Feedback, Contribution -This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/ai-sdk-js/issues). +This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/ai-sdk-js/issues). Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our [Contribution Guidelines](CONTRIBUTING.md). From 30d1478560306f5187a6ef78c4b11bdc0824f6fb Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Mon, 16 Sep 2024 13:49:35 +0200 Subject: [PATCH 02/30] chore:minor updates --- README.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index ef03e7ae..83768a13 100644 --- a/README.md +++ b/README.md @@ -119,10 +119,11 @@ The OpenAI client can be used to send chat completion or embedding requests to t # install foundation models package npm install @sap-ai-sdk/foundation-models ``` + #### Simple Chat completion ```TS -const client = new OpenAiChatClient({ modelName: 'gpt-35-turbo' }); +const client = new OpenAiChatClient({ deploymentId: 'd123456abcdefg' }); const response = await client.run({ messages: [ { @@ -133,7 +134,7 @@ const response = await client.run({ }) ``` -It is also possible to create a chat client by passing a `deploymentId` instead of the `modelName`. +It is also possible to create a chat client by passing a `modelName`instead of the `deploymentId`. ## Support, Feedback, Contribution From 5ca2a7b7ee9d6942d99e60e24e2a82655be34a46 Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Mon, 16 Sep 2024 14:02:59 +0200 Subject: [PATCH 03/30] chore:Add documentation for embedding client --- README.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 83768a13..0e02afbc 100644 --- a/README.md +++ b/README.md @@ -120,9 +120,11 @@ The OpenAI client can be used to send chat completion or embedding requests to t npm install @sap-ai-sdk/foundation-models ``` -#### Simple Chat completion +#### Chat completion example ```TS +import { OpenAiChatClient } from '@sap-ai-sdk/foundation-models'; + const client = new OpenAiChatClient({ deploymentId: 'd123456abcdefg' }); const response = await client.run({ messages: [ @@ -136,6 +138,17 @@ const response = await client.run({ It is also possible to create a chat client by passing a `modelName`instead of the `deploymentId`. +#### Embedding example + +```TS +import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; + +const client = new OpenAiEmbeddingClient('text-embedding-ada-002'); +const response = await client.run({ + input: ['AI is fascinating'] + }); +``` + ## Support, Feedback, Contribution This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/ai-sdk-js/issues). 
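The embedding example above returns raw vectors, which are typically compared with a similarity measure. The following sketch builds on the embedding client and `response.data[0]?.embedding` access shown above; the cosine-similarity helper is illustrative and not part of the SDK:

```ts
import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models';

// Illustrative helper: cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, value, i) => sum + value * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, value) => sum + value * value, 0));
  const normB = Math.sqrt(b.reduce((sum, value) => sum + value * value, 0));
  return dot / (normA * normB);
}

// Sketch: embed two texts and compare them, reading the vectors the same way
// as in the embedding example above. Error handling is omitted for brevity.
async function compareTexts(textA: string, textB: string): Promise<number> {
  const client = new OpenAiEmbeddingClient('text-embedding-ada-002');
  const response = await client.run({ input: [textA, textB] });
  const vectorA = response.data[0]?.embedding ?? [];
  const vectorB = response.data[1]?.embedding ?? [];
  return cosineSimilarity(vectorA, vectorB);
}
```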
From 9bccc16628cb18855eb434ac7bdd0ac385636cda Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Mon, 16 Sep 2024 15:40:48 +0200 Subject: [PATCH 04/30] chore: switch to the correct readme --- README.md | 83 ---------------------------- packages/foundation-models/README.md | 49 +++++++++++++++- 2 files changed, 47 insertions(+), 85 deletions(-) diff --git a/README.md b/README.md index 0e02afbc..174630ff 100644 --- a/README.md +++ b/README.md @@ -66,89 +66,6 @@ $ npm install @sap-ai-sdk/orchestration We have created a sample project demonstrating the different clients' usage of the SAP Cloud SDK for AI for TypeScript/JavaScript. The [project README](./sample-code/README.md) outlines the set-up needed to build and run it locally. -### OpenAI client - -The OpenAI client can be used to send chat completion or embedding requests to the OpenAI model deployed in SAP Generative AI Hub. - -#### Prerequisites - -- A deployed OpenAI model in AI Core. - - [How to deploy a model to AI Core](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-deployment-for-generative-ai-model-in-sap-ai-core) - -
An example deployed model from the AI Core /deployments endpoint -
-        {
-        "id": "d123456abcdefg",
-        "deploymentUrl": "https://api.ai.region.aws.ml.hana.ondemand.com/v2/inference/deployments/d123456abcdefg",
-        "configurationId": "12345-123-123-123-123456abcdefg",
-        "configurationName": "gpt-35-turbo",
-        "scenarioId": "foundation-models",
-        "status": "RUNNING",
-        "statusMessage": null,
-        "targetStatus": "RUNNING",
-        "lastOperation": "CREATE",
-        "latestRunningConfigurationId": "12345-123-123-123-123456abcdefg",
-        "ttl": null,
-        "details": {
-          "scaling": {
-            "backendDetails": null,
-            "backend_details": {
-            }
-          },
-          "resources": {
-            "backendDetails": null,
-            "backend_details": {
-              "model": {
-                "name": "gpt-35-turbo",
-                "version": "latest"
-              }
-            }
-          }
-        },
-        "createdAt": "2024-07-03T12:44:22Z",
-        "modifiedAt": "2024-07-16T12:44:19Z",
-        "submissionTime": "2024-07-03T12:44:51Z",
-        "startTime": "2024-07-03T12:45:56Z",
-        "completionTime": null
-      }
-      
-
- -#### Install the required packages - -```bash -# install foundation models package -npm install @sap-ai-sdk/foundation-models -``` - -#### Chat completion example - -```TS -import { OpenAiChatClient } from '@sap-ai-sdk/foundation-models'; - -const client = new OpenAiChatClient({ deploymentId: 'd123456abcdefg' }); -const response = await client.run({ - messages: [ - { - role: 'user', - content: 'Where is the deepest place on earth located' - } - ] - }) -``` - -It is also possible to create a chat client by passing a `modelName`instead of the `deploymentId`. - -#### Embedding example - -```TS -import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; - -const client = new OpenAiEmbeddingClient('text-embedding-ada-002'); -const response = await client.run({ - input: ['AI is fascinating'] - }); -``` - ## Support, Feedback, Contribution This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/ai-sdk-js/issues). diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 770e01e4..c57a0fe2 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -17,9 +17,54 @@ $ npm install @sap-ai-sdk/foundation-models - Create a `.env` file in the sample-code directory. - Add an entry `AICORE_SERVICE_KEY=''`. -## Usage +## OpenAI client - +The OpenAI client can be used to send chat completion or embedding requests to the OpenAI model deployed in SAP Generative AI Hub. + +### Prerequisites + +- A deployed OpenAI model in SAP Generative AI Hub. + - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP Generative AI Hub. +- `sap-ai-sdk/foundation-models` package installed in your project. + +### Chat completion Client Usage + +```TS +import { OpenAiChatClient } from '@sap-ai-sdk/foundation-models'; + +const client = new OpenAiChatClient('gpt-35-turbo'); +const response = await client.run({ + messages: [ + { + role: 'user', + content: 'Where is the deepest place on earth located' + } + ] + }) +const responseContent = response.getContent(); +``` + +It is also possible to create a chat client by passing a `deploymentId` instead of a `modelName`. + +On the response obtained from the client, you could also use convenience functions like `getContent()`, `getFinishReason()` and `getTokenUsage()` to get easy access to the certain parts of the response. + +### Embedding Client Usage + +```TS +import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; + +const client = new OpenAiEmbeddingClient({ deploymentId: 'd123456abcdefg' }); +const response = await client.run({ + input: 'AI is fascinating' + }); +const embedding = response.data[0]?.embedding; +``` + +It is also possible to create an embedding client by passing a `modelName` instead of a `deploymentId`. + +## Caching + +The deployment information which includes deployment id and properties like model name and model version is also cached by default for 5 mins. So, if you create an OpenAI client with a `modelName`, the deployment information is fetched from the cache if it is available. 
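To illustrate the idea behind this 5-minute cache (a conceptual sketch only, not the SDK's actual implementation), a time-based lookup keyed by model name could look roughly like this:

```ts
// Conceptual sketch of a 5-minute cache for resolved deployment information.
// Names and shapes are illustrative, not the SDK's internal implementation.
interface DeploymentInfo {
  id: string;
  modelName: string;
  modelVersion: string;
}

const TTL_IN_MS = 5 * 60 * 1000;
const cache = new Map<string, { value: DeploymentInfo; storedAt: number }>();

function getCachedDeployment(modelName: string): DeploymentInfo | undefined {
  const entry = cache.get(modelName);
  if (entry && Date.now() - entry.storedAt < TTL_IN_MS) {
    return entry.value;
  }
  cache.delete(modelName);
  return undefined;
}

function cacheDeployment(info: DeploymentInfo): void {
  cache.set(info.modelName, { value: info, storedAt: Date.now() });
}
```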
## Support, Feedback, Contribution From 988073b1b2b06857c479f38b04770843271a00c1 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Tue, 17 Sep 2024 17:22:46 +0200 Subject: [PATCH 05/30] Update packages/foundation-models/README.md Co-authored-by: Matthias Kuhr <52661546+MatKuhr@users.noreply.github.com> --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index c57a0fe2..e0896555 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -19,7 +19,7 @@ $ npm install @sap-ai-sdk/foundation-models ## OpenAI client -The OpenAI client can be used to send chat completion or embedding requests to the OpenAI model deployed in SAP Generative AI Hub. +The OpenAI client can be used to send chat completion or embedding requests to OpenAI models deployed in SAP Generative AI Hub. ### Prerequisites From 86dcfbc5afd167059196d4d5e177a6c3cde529dd Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 09:53:06 +0200 Subject: [PATCH 06/30] chore: cleanup root readme --- README.md | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/README.md b/README.md index 174630ff..cd60d099 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ # SAP Cloud SDK for AI -Integrate chat completion into your business applications with SAP Cloud SDK for GenAI Hub. Leverage the Generative AI Hub of SAP AI Core to make use of templating, grounding, data masking, content filtering and more. Setup your SAP AI Core instance with SAP Cloud SDK for AI Core. +Integrate chat completion into your business applications with SAP Cloud SDK for generative AI hub. Leverage the generative AI hub of SAP AI Core to make use of templating, grounding, data masking, content filtering and more. Setup your SAP AI Core instance with SAP Cloud SDK for AI Core. ## Disclaimer ⚠️ @@ -27,6 +27,16 @@ This project is currently in an experimental state. All functionality and naming This project publishes multiple packages and is managed using [pnpm](https://pnpm.io/) +### @sap-ai-sdk/orchestration + +This package incorporates generative AI orchestration capabilities into your AI activities in SAP AI Core and SAP AI Launchpad. + +#### Installation + +``` +$ npm install @sap-ai-sdk/orchestration +``` + ### @sap-ai-sdk/ai-api This package provides tools to manage your scenarios and workflows in SAP AI Core. @@ -36,7 +46,7 @@ This package provides tools to manage your scenarios and workflows in SAP AI Cor - Deploy inference endpoints for your trained models. - Register custom Docker registries, sync AI content from your own git repositories, and register your own object storage for training data and model artifacts. -To install the AI Core package in your project, run: +#### Installation ``` $ npm install @sap-ai-sdk/ai-api @@ -46,22 +56,12 @@ $ npm install @sap-ai-sdk/ai-api This package incorporates generative AI foundation models into your AI activities in SAP AI Core and SAP AI Launchpad. -To install the Gen AI Hub package in your project, run: +#### Installation ``` $ npm install @sap-ai-sdk/foundation-models ``` -### @sap-ai-sdk/orchestration - -This package incorporates generative AI orchestration capabilities into your AI activities in SAP AI Core and SAP AI Launchpad. 
- -To install the Gen AI Hub package in your project, run: - -``` -$ npm install @sap-ai-sdk/orchestration -``` - ## SAP Cloud SDK for AI Sample Project We have created a sample project demonstrating the different clients' usage of the SAP Cloud SDK for AI for TypeScript/JavaScript. The [project README](./sample-code/README.md) outlines the set-up needed to build and run it locally. From ecebfa4fe25a6062fc26f1a4abe92c6bfd1548da Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 09:58:50 +0200 Subject: [PATCH 07/30] chore: minor fix --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index cd60d099..8283b9f7 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ # SAP Cloud SDK for AI -Integrate chat completion into your business applications with SAP Cloud SDK for generative AI hub. Leverage the generative AI hub of SAP AI Core to make use of templating, grounding, data masking, content filtering and more. Setup your SAP AI Core instance with SAP Cloud SDK for AI Core. +Integrate chat completion into your business applications with SAP Cloud SDK for AI. Leverage the generative AI hub of SAP AI Core to make use of templating, grounding, data masking, content filtering and more. Setup your SAP AI Core instance with SAP Cloud SDK for AI Core. ## Disclaimer ⚠️ From fc1dc2a3e097d2bc7130fdf0bcfdf028c04fc6a0 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Wed, 18 Sep 2024 11:29:40 +0200 Subject: [PATCH 08/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index e0896555..39565c62 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -55,8 +55,8 @@ import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; const client = new OpenAiEmbeddingClient({ deploymentId: 'd123456abcdefg' }); const response = await client.run({ - input: 'AI is fascinating' - }); + input: 'AI is fascinating' +}); const embedding = response.data[0]?.embedding; ``` From e11322eecf15c19fc2661ba0af06f2fd67443df4 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Wed, 18 Sep 2024 11:29:51 +0200 Subject: [PATCH 09/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 39565c62..14b04bdb 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -34,13 +34,11 @@ import { OpenAiChatClient } from '@sap-ai-sdk/foundation-models'; const client = new OpenAiChatClient('gpt-35-turbo'); const response = await client.run({ - messages: [ - { - role: 'user', - content: 'Where is the deepest place on earth located' - } - ] - }) + messages: [{ + role: 'user', + content: 'Where is the deepest place on earth located' + }] +}); const responseContent = response.getContent(); ``` From 676526b4102a80bb3aa79a5a490b47ac7d352bc3 Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 11:30:52 +0200 Subject: [PATCH 10/30] chore: progress till now --- packages/foundation-models/README.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git 
a/packages/foundation-models/README.md b/packages/foundation-models/README.md index e0896555..3d061cc1 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -17,22 +17,22 @@ $ npm install @sap-ai-sdk/foundation-models - Create a `.env` file in the sample-code directory. - Add an entry `AICORE_SERVICE_KEY=''`. -## OpenAI client +## Azure OpenAI client -The OpenAI client can be used to send chat completion or embedding requests to OpenAI models deployed in SAP Generative AI Hub. +The Azure OpenAI client can be used to send chat completion or embedding requests to OpenAI models deployed in SAP generative AI hub. ### Prerequisites -- A deployed OpenAI model in SAP Generative AI Hub. - - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP Generative AI Hub. +- A deployed OpenAI model in SAP generative AI hub. + - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP generative AI hub. - `sap-ai-sdk/foundation-models` package installed in your project. -### Chat completion Client Usage +### Azure OpenAI chat client usage ```TS -import { OpenAiChatClient } from '@sap-ai-sdk/foundation-models'; +import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; -const client = new OpenAiChatClient('gpt-35-turbo'); +const client = new AzureOpenAiChatClient('gpt-35-turbo'); const response = await client.run({ messages: [ { @@ -48,7 +48,7 @@ It is also possible to create a chat client by passing a `deploymentId` instead On the response obtained from the client, you could also use convenience functions like `getContent()`, `getFinishReason()` and `getTokenUsage()` to get easy access to the certain parts of the response. -### Embedding Client Usage +### Azure OpenAI Embedding client usage ```TS import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; From ebb4468d06a6b94cf5c45cbd3a407d84d9da3d7d Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 15:02:34 +0200 Subject: [PATCH 11/30] chore:progress till now --- packages/foundation-models/README.md | 109 +++++++++++++++++++++++---- 1 file changed, 96 insertions(+), 13 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 06accee5..ecd2613a 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -19,50 +19,129 @@ $ npm install @sap-ai-sdk/foundation-models ## Azure OpenAI client -The Azure OpenAI client can be used to send chat completion or embedding requests to OpenAI models deployed in SAP generative AI hub. +The Azure OpenAI client allows you to send chat completion or embedding requests to OpenAI models deployed in SAP generative AI hub. + +To make a generative AI model available for use, you need to create a deployment. You can create a deployment for each model and model version, as well as for each resource group that you want to use with generative AI hub. + +After the deployment is complete, you have a `deploymentUrl`, which can be used across your organization to access the model. ### Prerequisites - A deployed OpenAI model in SAP generative AI hub. - - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP generative AI hub. + - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP generative AI hub. 
For more information, see [here](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-deployment-for-generative-ai-model-in-sap-ai-core) - `sap-ai-sdk/foundation-models` package installed in your project. ### Azure OpenAI chat client usage +Use the `AzureOpenAiChatClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. +You can pass the model name as a parameter to the client, the sdk will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. + +The deployment information which includes deployment ID and properties like model name and model version is also cached by default for 5 mins so that performance is not impacted by fetching the deployment information for every request. + ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; const client = new AzureOpenAiChatClient('gpt-35-turbo'); const response = await client.run({ - messages: [{ - role: 'user', - content: 'Where is the deepest place on earth located' - }] + messages: [ + { + role: 'user', + content: 'Where is the deepest place on earth located' + } + ] }); + const responseContent = response.getContent(); + ``` -It is also possible to create a chat client by passing a `deploymentId` instead of a `modelName`. +To send a chat completion request with system messages use: + +```TS +import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; + +const client = new AzureOpenAiChatClient('gpt-35-turbo'); +const response = await client.run({ + messages: [ + { + role: 'system', + content: 'You are a friendly chatbot.' + }, + { + role: 'user', + content: 'Hi, my name is Isa' + }, + { + role: 'system', + content: 'Hi Isa! It is nice to meet you. Is there anything I can help you with today?' + }, + { + role: 'user', + content: 'Can you remind me, What is my name?' + } + ] +}); +const responseContent = response.getContent(); +const tokenUsage = response.getTokenUsage(); + +logger.info( + `Total tokens consumed by the request: ${tokenUsage.total_tokens}\n` + + `Input prompt tokens consumed: ${tokenUsage.prompt_tokens}\n` + + `Output text completion tokens consumed: ${tokenUsage.completion_tokens}\n` +); + +``` + +You can see that one can send multiple messages in a single request. +This is useful in providing a history of the conversation to the model. + +#### Obtaining a client using deployment ID -On the response obtained from the client, you could also use convenience functions like `getContent()`, `getFinishReason()` and `getTokenUsage()` to get easy access to the certain parts of the response. +In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name. + +```TS +import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; + +const response = new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run({ + messages: [ + { + 'role':'user', + 'content': 'What is the capital of France?' + } + ] +}); +``` ### Azure OpenAI Embedding client usage ```TS import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; -const client = new OpenAiEmbeddingClient({ deploymentId: 'd123456abcdefg' }); +const client = new OpenAiEmbeddingClient('text-embedding-ada-002'); const response = await client.run({ input: 'AI is fascinating' }); -const embedding = response.data[0]?.embedding; +const embedding = response.getEmbedding(); ``` -It is also possible to create an embedding client by passing a `modelName` instead of a `deploymentId`. 
+#### Obtaining a client using deployment ID + +In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name. + +```TS +import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; -## Caching +const response = new OpenAiEmbeddingClient({ deploymentId: 'd1234' }).run({ + messages: [ + { + 'role':'user', + 'content': 'Where is the deepest place on earth located?' + } + ] +}); +``` -The deployment information which includes deployment id and properties like model name and model version is also cached by default for 5 mins. So, if you create an OpenAI client with a `modelName`, the deployment information is fetched from the cache if it is available. +It is also possible to create an embedding client by passing a `modelName` instead of a `deploymentId`. ## Support, Feedback, Contribution @@ -73,3 +152,7 @@ Contribution and feedback are encouraged and always welcome. For more informatio ## License The SAP Cloud SDK for AI is released under the [Apache License Version 2.0.](http://www.apache.org/licenses/) + +``` + +``` From 4fd7c92e10c07d76acb01051eee4d952d515ad1f Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 15:08:38 +0200 Subject: [PATCH 12/30] chore: address review comments --- packages/foundation-models/README.md | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index ecd2613a..3e5337a3 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -23,7 +23,7 @@ The Azure OpenAI client allows you to send chat completion or embedding requests To make a generative AI model available for use, you need to create a deployment. You can create a deployment for each model and model version, as well as for each resource group that you want to use with generative AI hub. -After the deployment is complete, you have a `deploymentUrl`, which can be used across your organization to access the model. +After the deployment is complete, you have a `deploymentUrl`, which can be used to access the model. ### Prerequisites @@ -114,10 +114,15 @@ const response = new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run({ ### Azure OpenAI Embedding client usage +Use the `AzureOpenAiEmbeddingClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. +You can pass the model name as a parameter to the client, the sdk will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. + +The deployment information which includes deployment ID and properties like model name and model version is also cached by default for 5 mins so that performance is not impacted by fetching the deployment information for every request. + ```TS -import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; +import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; -const client = new OpenAiEmbeddingClient('text-embedding-ada-002'); +const client = new AzureOpenAiEmbeddingClient('text-embedding-ada-002'); const response = await client.run({ input: 'AI is fascinating' }); @@ -129,9 +134,9 @@ const embedding = response.getEmbedding(); In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name. 
```TS -import { OpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; +import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; -const response = new OpenAiEmbeddingClient({ deploymentId: 'd1234' }).run({ +const response = new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234' }).run({ messages: [ { 'role':'user', @@ -141,8 +146,6 @@ const response = new OpenAiEmbeddingClient({ deploymentId: 'd1234' }).run({ }); ``` -It is also possible to create an embedding client by passing a `modelName` instead of a `deploymentId`. - ## Support, Feedback, Contribution This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/ai-sdk-js/issues). From eb17cf7f9a0a4791555a89a1b2d09e4c70e3145e Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 15:14:55 +0200 Subject: [PATCH 13/30] chore: minor cleanup --- packages/foundation-models/README.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 3e5337a3..f2c7fbaf 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -17,11 +17,12 @@ $ npm install @sap-ai-sdk/foundation-models - Create a `.env` file in the sample-code directory. - Add an entry `AICORE_SERVICE_KEY=''`. -## Azure OpenAI client +## Azure OpenAI Client The Azure OpenAI client allows you to send chat completion or embedding requests to OpenAI models deployed in SAP generative AI hub. -To make a generative AI model available for use, you need to create a deployment. You can create a deployment for each model and model version, as well as for each resource group that you want to use with generative AI hub. +To make a generative AI model available for use, you need to create a deployment. +You can create a deployment for each model and model version, as well as for each resource group that you want to use with generative AI hub. After the deployment is complete, you have a `deploymentUrl`, which can be used to access the model. @@ -31,7 +32,7 @@ After the deployment is complete, you have a `deploymentUrl`, which can be used - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP generative AI hub. For more information, see [here](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-deployment-for-generative-ai-model-in-sap-ai-core) - `sap-ai-sdk/foundation-models` package installed in your project. -### Azure OpenAI chat client usage +### Usage of Azure OpenAI Chat Client Use the `AzureOpenAiChatClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. You can pass the model name as a parameter to the client, the sdk will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. @@ -97,7 +98,7 @@ This is useful in providing a history of the conversation to the model. #### Obtaining a client using deployment ID -In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name. 
+In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name: ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; @@ -112,7 +113,7 @@ const response = new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run({ }); ``` -### Azure OpenAI Embedding client usage +### Usage of Azure OpenAI Embedding Client Use the `AzureOpenAiEmbeddingClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. You can pass the model name as a parameter to the client, the sdk will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. @@ -131,7 +132,7 @@ const embedding = response.getEmbedding(); #### Obtaining a client using deployment ID -In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name. +In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name: ```TS import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; From 73f59af7f370f2fb1684677e7eaa4e5982d73133 Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 15:36:08 +0200 Subject: [PATCH 14/30] chore: address review comments --- packages/foundation-models/README.md | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index f2c7fbaf..1b3f3fc9 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -80,8 +80,11 @@ const response = await client.run({ role: 'user', content: 'Can you remind me, What is my name?' } - ] + ], + max_tokens: 100, + temperature: 0.0 }); + const responseContent = response.getContent(); const tokenUsage = response.getTokenUsage(); @@ -94,6 +97,8 @@ logger.info( ``` You can see that one can send multiple messages in a single request. +Some parameters like `max_tokens` and `temperature` can be also be passed to the request to control the completion behavior. +Refer to `AzureOpenAiChatCompletionParameters` interface for more parameters that can be passed to the chat completion request. This is useful in providing a history of the conversation to the model. 
#### Obtaining a client using deployment ID @@ -103,7 +108,7 @@ In case you want to obtain the model by using the ID of your deployment on your ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; -const response = new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run({ +const response = await new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run({ messages: [ { 'role':'user', @@ -137,7 +142,7 @@ In case you want to obtain the model by using the ID of your deployment on your ```TS import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; -const response = new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234' }).run({ +const response = await new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234' }).run({ messages: [ { 'role':'user', From 10e0946d1cf107d9623b20e909f0a56781151fb9 Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 15:46:11 +0200 Subject: [PATCH 15/30] chore: address review comments --- packages/foundation-models/README.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 1b3f3fc9..00dac62a 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -97,9 +97,10 @@ logger.info( ``` You can see that one can send multiple messages in a single request. +This is useful in providing a history of the conversation to the model. + Some parameters like `max_tokens` and `temperature` can be also be passed to the request to control the completion behavior. Refer to `AzureOpenAiChatCompletionParameters` interface for more parameters that can be passed to the chat completion request. -This is useful in providing a history of the conversation to the model. #### Obtaining a client using deployment ID @@ -118,6 +119,12 @@ const response = await new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run( }); ``` +You could also pass the [resource group](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) name to the client along with the deployment ID: + +```TS +const client = new AzureOpenAiChatClient({ deploymentId: 'd1234' , resourceGroup: 'rg1234' }) +``` + ### Usage of Azure OpenAI Embedding Client Use the `AzureOpenAiEmbeddingClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. @@ -152,6 +159,12 @@ const response = await new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234' }) }); ``` +You could also pass the [resource group](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) name to the client along with the deployment ID: + +```TS +const client = new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234' , resourceGroup: 'rg1234' }) +``` + ## Support, Feedback, Contribution This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/ai-sdk-js/issues). 
From 3ac37881878f734479ea5c6dc1c2dea7928b779f Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Wed, 18 Sep 2024 15:51:02 +0200 Subject: [PATCH 16/30] chore: minor fixes --- packages/foundation-models/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 00dac62a..57d589ca 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -19,13 +19,13 @@ $ npm install @sap-ai-sdk/foundation-models ## Azure OpenAI Client -The Azure OpenAI client allows you to send chat completion or embedding requests to OpenAI models deployed in SAP generative AI hub. - To make a generative AI model available for use, you need to create a deployment. You can create a deployment for each model and model version, as well as for each resource group that you want to use with generative AI hub. After the deployment is complete, you have a `deploymentUrl`, which can be used to access the model. +The Azure OpenAI client allows you to send chat completion or embedding requests to OpenAI models deployed in SAP generative AI hub. + ### Prerequisites - A deployed OpenAI model in SAP generative AI hub. @@ -100,7 +100,7 @@ You can see that one can send multiple messages in a single request. This is useful in providing a history of the conversation to the model. Some parameters like `max_tokens` and `temperature` can be also be passed to the request to control the completion behavior. -Refer to `AzureOpenAiChatCompletionParameters` interface for more parameters that can be passed to the chat completion request. +Refer to `AzureOpenAiChatCompletionParameters` interface for knowing more parameters that can be passed to the chat completion request. #### Obtaining a client using deployment ID @@ -127,7 +127,7 @@ const client = new AzureOpenAiChatClient({ deploymentId: 'd1234' , resourceGroup ### Usage of Azure OpenAI Embedding Client -Use the `AzureOpenAiEmbeddingClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. +Use the `AzureOpenAiEmbeddingClient` to send embedding requests to an OpenAI model deployed in SAP generative AI hub. You can pass the model name as a parameter to the client, the sdk will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. The deployment information which includes deployment ID and properties like model name and model version is also cached by default for 5 mins so that performance is not impacted by fetching the deployment information for every request. From 9e840d97c856f4ec18f11b09f2022e4919d12e9d Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 09:48:38 +0200 Subject: [PATCH 17/30] Update README.md Co-authored-by: Matthias Kuhr <52661546+MatKuhr@users.noreply.github.com> --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 8283b9f7..2f7d62a3 100644 --- a/README.md +++ b/README.md @@ -29,7 +29,7 @@ This project publishes multiple packages and is managed using [pnpm](https://pnp ### @sap-ai-sdk/orchestration -This package incorporates generative AI orchestration capabilities into your AI activities in SAP AI Core and SAP AI Launchpad. +This package incorporates generative AI [orchestration](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/orchestration) capabilities into your AI activities in SAP AI Core and SAP AI Launchpad. 
#### Installation From 97a826bc113d278dd6b9bcea70d6d8ef26bdff2a Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Thu, 19 Sep 2024 09:50:17 +0200 Subject: [PATCH 18/30] chore: review comments --- packages/foundation-models/README.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 57d589ca..7252c7e7 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -174,7 +174,3 @@ Contribution and feedback are encouraged and always welcome. For more informatio ## License The SAP Cloud SDK for AI is released under the [Apache License Version 2.0.](http://www.apache.org/licenses/) - -``` - -``` From 7563401f3e3f185f47b7592a743ba11652371970 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 10:01:31 +0200 Subject: [PATCH 19/30] Update packages/foundation-models/README.md Co-authored-by: Matthias Kuhr <52661546+MatKuhr@users.noreply.github.com> --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 7252c7e7..8a58abcd 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -73,7 +73,7 @@ const response = await client.run({ content: 'Hi, my name is Isa' }, { - role: 'system', + role: 'assistant', content: 'Hi Isa! It is nice to meet you. Is there anything I can help you with today?' }, { From ba86fba7cf0de9b54e6701cc19c388ad8a64ad72 Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Thu, 19 Sep 2024 10:23:44 +0200 Subject: [PATCH 20/30] chore: address review comments --- packages/foundation-models/README.md | 31 ++++++---------------------- 1 file changed, 6 insertions(+), 25 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 8a58abcd..d6a15909 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -102,14 +102,16 @@ This is useful in providing a history of the conversation to the model. Some parameters like `max_tokens` and `temperature` can be also be passed to the request to control the completion behavior. Refer to `AzureOpenAiChatCompletionParameters` interface for knowing more parameters that can be passed to the chat completion request. -#### Obtaining a client using deployment ID +#### Obtaining a client using Resource Groups -In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name: +[Resource groups](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) represent a virtual collection of related resources within the scope of one SAP AI Core tenant. 
+ +You can also obtain a model on your own by using a resource group and ID of your deployment instead of a model name: ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; -const response = await new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run({ +const response = await new AzureOpenAiChatClient({ deploymentId: 'd1234' , resourceGroup: 'rg1234' }).run({ messages: [ { 'role':'user', @@ -117,12 +119,7 @@ const response = await new AzureOpenAiChatClient({ deploymentId: 'd1234' }).run( } ] }); -``` -You could also pass the [resource group](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) name to the client along with the deployment ID: - -```TS -const client = new AzureOpenAiChatClient({ deploymentId: 'd1234' , resourceGroup: 'rg1234' }) ``` ### Usage of Azure OpenAI Embedding Client @@ -140,26 +137,10 @@ const response = await client.run({ input: 'AI is fascinating' }); const embedding = response.getEmbedding(); -``` - -#### Obtaining a client using deployment ID - -In case you want to obtain the model by using the ID of your deployment on your own you can pass it instead of a model name: - -```TS -import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models'; -const response = await new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234' }).run({ - messages: [ - { - 'role':'user', - 'content': 'Where is the deepest place on earth located?' - } - ] -}); ``` -You could also pass the [resource group](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) name to the client along with the deployment ID: +Like in [Azure OpenAI Chat client](#obtaining-a-client-using-resource-groups), you could also pass the [resource group](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) name to the client along with the deployment ID instead of the model name. ```TS const client = new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234' , resourceGroup: 'rg1234' }) From e8b9804aca8c9ffe4a08f7572b18c0e92a96b191 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 11:57:43 +0200 Subject: [PATCH 21/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index d6a15909..8cac4def 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -29,7 +29,7 @@ The Azure OpenAI client allows you to send chat completion or embedding requests ### Prerequisites - A deployed OpenAI model in SAP generative AI hub. - - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP generative AI hub. For more information, see [here](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-deployment-for-generative-ai-model-in-sap-ai-core) + - You can use the [`DeploymentApi`](../ai-api/README.md#deploymentapi) from `@sap-ai-sdk/ai-api` to deploy a model to SAP generative AI hub. For more information, see [here](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/create-deployment-for-generative-ai-model-in-sap-ai-core). - `sap-ai-sdk/foundation-models` package installed in your project. 
### Usage of Azure OpenAI Chat Client From f038ed3f6f1b7913aa9d53646968e0851ce759d9 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 11:57:56 +0200 Subject: [PATCH 22/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 8cac4def..09397183 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -35,7 +35,7 @@ The Azure OpenAI client allows you to send chat completion or embedding requests ### Usage of Azure OpenAI Chat Client Use the `AzureOpenAiChatClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. -You can pass the model name as a parameter to the client, the sdk will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. +You can pass the model name as a parameter to the client, the SDK will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. The deployment information which includes deployment ID and properties like model name and model version is also cached by default for 5 mins so that performance is not impacted by fetching the deployment information for every request. From fce00b3faadc7bf69c9839a2f456571ad20846e5 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 11:58:31 +0200 Subject: [PATCH 23/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 09397183..4e7971fb 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -56,7 +56,7 @@ const responseContent = response.getContent(); ``` -To send a chat completion request with system messages use: +Use the following snippet to send a chat completion request with system messages: ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; From d0cc6bd64b81eef252b4325a7fd08431cbce6873 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 11:58:41 +0200 Subject: [PATCH 24/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 4e7971fb..d5905857 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -99,7 +99,7 @@ logger.info( You can see that one can send multiple messages in a single request. This is useful in providing a history of the conversation to the model. -Some parameters like `max_tokens` and `temperature` can be also be passed to the request to control the completion behavior. +Pass parameters like `max_tokens` and `temperature` to the request to control the completion behavior. Refer to `AzureOpenAiChatCompletionParameters` interface for knowing more parameters that can be passed to the chat completion request. 
#### Obtaining a client using Resource Groups From 112f33a8f500388efefdf933847b29c4d8d12a28 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 11:58:55 +0200 Subject: [PATCH 25/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index d5905857..67ae7d3c 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -96,8 +96,8 @@ logger.info( ``` -You can see that one can send multiple messages in a single request. -This is useful in providing a history of the conversation to the model. +It is possible to send multiple messages in a single request. +This feature is useful for providing a history of the conversation to the model. Pass parameters like `max_tokens` and `temperature` to the request to control the completion behavior. Refer to `AzureOpenAiChatCompletionParameters` interface for knowing more parameters that can be passed to the chat completion request. From 2b3b0f3ed161c31c1070c8985e76b68d959568af Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 11:59:02 +0200 Subject: [PATCH 26/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index 67ae7d3c..ef17f90b 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -102,7 +102,7 @@ This feature is useful for providing a history of the conversation to the model. Pass parameters like `max_tokens` and `temperature` to the request to control the completion behavior. Refer to `AzureOpenAiChatCompletionParameters` interface for knowing more parameters that can be passed to the chat completion request. -#### Obtaining a client using Resource Groups +#### Obtaining a Client using Resource Groups [Resource groups](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) represent a virtual collection of related resources within the scope of one SAP AI Core tenant. From 271e30663c8d3ff7d6e592ccf5631cdd5b951c1e Mon Sep 17 00:00:00 2001 From: cloud-sdk-js Date: Thu, 19 Sep 2024 09:59:37 +0000 Subject: [PATCH 27/30] fix: Changes from lint --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index ef17f90b..ed5a9287 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -96,7 +96,7 @@ logger.info( ``` -It is possible to send multiple messages in a single request. +It is possible to send multiple messages in a single request. This feature is useful for providing a history of the conversation to the model. Pass parameters like `max_tokens` and `temperature` to the request to control the completion behavior. 
From 1ece4aa83797d42bf6108f90d3ef5aa730ee1dcb Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 12:08:55 +0200 Subject: [PATCH 28/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index ed5a9287..ecec2cfa 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -37,7 +37,7 @@ The Azure OpenAI client allows you to send chat completion or embedding requests Use the `AzureOpenAiChatClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. You can pass the model name as a parameter to the client, the SDK will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. -The deployment information which includes deployment ID and properties like model name and model version is also cached by default for 5 mins so that performance is not impacted by fetching the deployment information for every request. +By default, the system caches the deployment information, which includes the deployment ID and properties such as the model name and model version, for 5 minutes to prevent performance impact from fetching the deployment information for every request. ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; From dc6a3d010d26dd12a00464f64ea2549efd4ccd10 Mon Sep 17 00:00:00 2001 From: KavithaSiva <32287936+KavithaSiva@users.noreply.github.com> Date: Thu, 19 Sep 2024 12:09:07 +0200 Subject: [PATCH 29/30] Update packages/foundation-models/README.md Co-authored-by: Zhongpin Wang --- packages/foundation-models/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index ecec2cfa..e10a8dfa 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -106,7 +106,7 @@ Refer to `AzureOpenAiChatCompletionParameters` interface for knowing more parame [Resource groups](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/resource-groups?q=resource+group) represent a virtual collection of related resources within the scope of one SAP AI Core tenant. -You can also obtain a model on your own by using a resource group and ID of your deployment instead of a model name: +You can use the deployment ID and resource group as an alternative to obtaining a model by using the model name. ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; From bf5214ee0bfe7b2578bb96d3f978e437e04b60a0 Mon Sep 17 00:00:00 2001 From: KavithaSiva Date: Thu, 19 Sep 2024 12:16:12 +0200 Subject: [PATCH 30/30] chore: address review comments --- packages/foundation-models/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/packages/foundation-models/README.md b/packages/foundation-models/README.md index e10a8dfa..911872d7 100644 --- a/packages/foundation-models/README.md +++ b/packages/foundation-models/README.md @@ -37,7 +37,7 @@ The Azure OpenAI client allows you to send chat completion or embedding requests Use the `AzureOpenAiChatClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. 
You can pass the model name as a parameter to the client, the SDK will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. -By default, the system caches the deployment information, which includes the deployment ID and properties such as the model name and model version, for 5 minutes to prevent performance impact from fetching the deployment information for every request. +By default, the SDK caches the deployment information, which includes the deployment ID and properties such as the model name and model version, for 5 minutes to prevent performance impact from fetching the deployment information for every request. ```TS import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models'; @@ -125,9 +125,9 @@ const response = await new AzureOpenAiChatClient({ deploymentId: 'd1234' , resou ### Usage of Azure OpenAI Embedding Client Use the `AzureOpenAiEmbeddingClient` to send embedding requests to an OpenAI model deployed in SAP generative AI hub. -You can pass the model name as a parameter to the client, the sdk will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. +You can pass the model name as a parameter to the client, the SDK will implicitly fetch the deployment ID for the model from the AI Core service and use it to send the request. -The deployment information which includes deployment ID and properties like model name and model version is also cached by default for 5 mins so that performance is not impacted by fetching the deployment information for every request. +By default, the SDK caches the deployment information, which includes the deployment ID and properties such as the model name and model version, for 5 minutes to prevent performance impact from fetching the deployment information for every request. ```TS import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models';