diff --git a/README.md b/README.md
index 8e0c21b5d7..aa6a43b028 100644
--- a/README.md
+++ b/README.md
@@ -554,6 +554,8 @@ Platform usage.
 * [Composer DAG Load Generator](tools/cloud-composer-dag-generator) - This is an automatic DAG generator tool which can be used to create a test workload on a Cloud Composer environment, to test different Airflow configurations, or to fine-tune using the Composer/Airflow metrics.
+* [Gradio and Generative AI Example](examples/genai-gradio-example) - The example code allows developers
+  to create rapid Generative AI PoC applications with Gradio and Gen AI agents.
 
 ## Contributing
 
diff --git a/examples/genai-gradio-example/README.md b/examples/genai-gradio-example/README.md
new file mode 100644
index 0000000000..e0ce6767d0
--- /dev/null
+++ b/examples/genai-gradio-example/README.md
@@ -0,0 +1,104 @@
+```
+Copyright 2023 Google. This software is provided as-is, without warranty or
+representation for any use or purpose. Your use of it is subject to your
+agreement with Google.
+```
+## Technology Stack
+- Google Cloud Run
+- Google Artifact Registry
+- Google Cloud Storage
+- Google Speech-to-Text
+- Vertex AI Conversation
+- Dialogflow CX
+- Dialogflow CX Agent
+- Google Data Store
+- Google Secret Manager
+- Gradio
+
+## GCP Project Setup
+
+### Creating a Project in the Google Cloud Platform Console
+
+If you haven't already created a project, create one now. Projects enable you to
+manage all Google Cloud Platform resources for your app, including deployment,
+access control, billing, and services.
+
+1. Open the [Cloud Platform Console][cloud-console].
+2. In the drop-down menu at the top, select **NEW PROJECT**.
+3. Give your project a name.
+4. Make a note of the project ID, which might be different from the project
+   name. The project ID is used in commands and in configurations.
+
+[cloud-console]: https://console.cloud.google.com/
+
+### Enabling billing for your project
+
+If you haven't already enabled billing for your project, [enable
+billing][enable-billing] now. Enabling billing is required to use the
+Google Cloud services in this example and to create VM instances.
+
+[enable-billing]: https://console.cloud.google.com/project/_/settings
+
+### Install the Google Cloud SDK
+
+If you haven't already installed the Google Cloud SDK, [install the Google
+Cloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you
+to create and manage resources on Google Cloud Platform.
+
+[cloud-sdk]: https://cloud.google.com/sdk/
+
+### Setting Google Application Default Credentials
+
+Set your [Google Application Default
+Credentials][application-default-credentials] by [initializing the Google Cloud
+SDK][cloud-sdk-init] with the command:
+
+```
+ gcloud init
+```
+
+Generate a credentials file by running the
+[application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login)
+command:
+
+```
+ gcloud auth application-default login
+```
+
+[cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing
+
+[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials
+
+## Upload your data to a Cloud Storage bucket
+Follow these [instructions][instructions] to upload the PDF documents
+or PDF manuals to be used in this example.
+
+[instructions]: https://cloud.google.com/storage/docs/uploading-objects
+
+## Create a Generative AI Agent
+Follow the instructions at this [link][link] and perform the following:
+1. Create Data Stores: Select the information that you would like Vertex AI Search and Conversation to query.
+2. Create an Agent: Create the Dialogflow CX agent that queries the Data Store.
+3. Test the agent in the simulator.
+4. 
Take note of your agent link by going to the [Dialogflow CX Console][Dialogflow CX Console] and viewing the information about the agent you created.
+
+[link]: https://cloud.google.com/generative-ai-app-builder/docs/a
+[Dialogflow CX Console]: https://cloud.google.com/dialogflow/cx/docs/concept/console#agent
+
+### Dialogflow CX Agent Data Stores
+Data Stores are used to find answers for end-users' questions.
+Data Stores are a collection of documents, each of which references your data.
+
+For this particular example, the data store has the following characteristics:
+1. It contains your organizational documents or manuals.
+2. The data store type is unstructured, in PDF format.
+3. The data is uploaded without metadata for simplicity.
+You only need to point the import at the GCP bucket folder where the PDF files are.
+Their extension determines their type.
+
+When an end-user asks the agent a question, the agent searches for an answer in the
+given source content and summarizes the findings into a coherent agent response.
+It also provides supporting links to the sources of the response for the end-user to learn more.
+
+
diff --git a/examples/genai-gradio-example/frontend/Dockerfile b/examples/genai-gradio-example/frontend/Dockerfile
new file mode 100644
index 0000000000..bc5e130740
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/Dockerfile
@@ -0,0 +1,10 @@
+FROM python:3.10
+WORKDIR /usr/src/app
+
+RUN apt-get update
+RUN apt-get install -y ffmpeg
+COPY requirements.txt ./
+RUN pip install --no-cache-dir -r requirements.txt
+COPY . .
+EXPOSE 8080
+CMD [ "python", "./app/main.py" ]
\ No newline at end of file
diff --git a/examples/genai-gradio-example/frontend/README.md b/examples/genai-gradio-example/frontend/README.md
new file mode 100644
index 0000000000..243ee000b5
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/README.md
@@ -0,0 +1,64 @@
+```
+Copyright 2023 Google.
This software is provided as-is, without warranty or
+representation for any use or purpose. Your use of it is subject to your
+agreement with Google.
+```
+
+# Running a voice-activated chatbot on Linux
+
+## Enable APIs on GCP
+The following APIs need to be enabled in GCP:
+- Speech-to-Text API
+- Secret Manager API
+
+## Install Libraries
+```
+$ sudo apt-get install python3-pip python-dev
+$ sudo apt-get install ffmpeg
+```
+
+## Install Dependencies
+```
+$ pip install gradio==3.38.0 --use-deprecated=legacy-resolver
+$ pip install --upgrade google-cloud-speech==2.21.0
+$ pip install torch
+```
+
+If you run into disk-space issues, try increasing CPU and memory, or cache torch in a different temp folder as shown below:
+```
+$ pip install --cache-dir=/home/user/tmp torch
+```
+
+```
+$ pip3 install google-cloud-secret-manager==2.10.0
+$ pip3 install google-cloud-speech==2.21.0
+```
+
+## Service Account Access
+The service account you're using to run the application should have the following IAM roles:
+- Secret Manager Secret Accessor
+- Cloud Run Invoker (required to call the LLM middleware if the middleware is deployed in Cloud Run)
+
+## Replace the following parameters in ./app/config.ini with your values
+```
+SECRET_ID_IN_SECRET_MANAGER_FOR_PASSWORD: the secret id for the password secret created in Secret Manager.
+SECRET_MANAGER_PROJECT_ID: the GCP project id the application loads secrets from.
+LLM_MIDDLEWARE_HOST_URL: the host URL for the LLM middleware. The URL looks like https://llm-middleware-sdnkdn12.a.run.app if the application is deployed in Cloud Run.
+```
+
+## How to run the application
+From the frontend/app folder, run the following command:
+```
+$ python3 main.py
+```
+## How to enable SSL and expose the application outside of the VM
+To enable SSL encryption and expose the application port outside of the VM, use the following launch call
+in main.py:
+
+```
+bot_interface.launch(server_name="0.0.0.0",
+                     share=False,
+                     ssl_certfile="localhost.crt",
+                     ssl_keyfile="localhost.key",
+                     ssl_verify=False)
+```
\ No newline at end of file
diff --git a/examples/genai-gradio-example/frontend/app/bot_interface.py b/examples/genai-gradio-example/frontend/app/bot_interface.py
new file mode 100644
index 0000000000..ff8ebaee96
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/bot_interface.py
@@ -0,0 +1,130 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""Generate components to render the user interface."""
+
+from event_handlers import EventHandlers as handlers
+import gradio as gr
+
+
+class BotInterface:
+    """Defines the user interface."""
+
+    def initialize(self, config) -> gr.Blocks:
+        """Initializes and returns the chatbot interface with event handlers attached.
+
+        Args:
+            config: A configparser having gradio configurations loaded from the
+                config.ini file.
+
+        Returns:
+            bot_interface (gr.Blocks): Gradio Blocks consisting of UI components.
+        """
+        event_handlers = handlers(config)
+        with gr.Blocks(
+            css=".gradio-container {border: 1px solid #e5e5e5}"
+        ) as bot_interface:
+            session = gr.State([])
+            with gr.Row():
+                with gr.Column(scale=10):
+                    with gr.Row(scale=1):
+                        chatbot = gr.Chatbot(
+                            [(None, config["initial-message"])],
+                            elem_id="chatbot",
+                            show_label=False,
+                            height=640,
+                        )
+                    with gr.Row():
+                        with gr.Column(scale=12):
+                            user_input = gr.Textbox(
+                                show_label=False,
+                                placeholder=config["text-placeholder"],
+                                container=False,
+                            )
+                        with gr.Column(min_width=70, scale=1):
+                            submit_btn = gr.Button("Send")
+                    with gr.Row():
+                        audio_input = gr.Audio(source="microphone", type="filepath")
+                    with gr.Row():
+                        clear_btn = gr.Button("Start a new conversation")
+            with gr.Row():
+                with gr.Column(scale=1):
+                    source_location = gr.Textbox(
+                        label=config["location-label"],
+                        show_label=True,
+                        interactive=False,
+                        show_copy_button=True,
+                        lines=10,
+                        max_lines=10,
+                    )
+            input_msg = user_input.submit(
+                event_handlers.add_user_input,
+                [chatbot, user_input],
+                [chatbot, user_input],
+                queue=False,
+            ).then(
+                event_handlers.bot_response,
+                [chatbot, session],
+                [
+                    chatbot,
+                    session,
+                    source_location,
+                ],
+            )
+            submit_done = submit_btn.click(
+                event_handlers.add_user_input,
+                [chatbot, user_input],
+                [chatbot, user_input],
+                queue=False,
+            ).then(
+                event_handlers.bot_response,
+                [chatbot, session],
+                [
+                    chatbot,
+                    session,
+                    source_location,
+                ],
+            )
+            clear_btn.click(
+                event_handlers.clear_history,
+                [chatbot, session],
+                [
+                    chatbot,
+                    session,
+                    source_location,
+                ],
+                queue=False,
+            )
+            input_msg.then(
+                lambda: gr.update(interactive=True), None, [user_input], queue=False
+            )
+            submit_done.then(
+                lambda: gr.update(interactive=True), None, [user_input], queue=False
+            )
+            inputs_event = (
+                audio_input.stop_recording(
+                    event_handlers.transcribe_file, audio_input, user_input
+                )
+                .then(
+                    event_handlers.add_user_input,
+                    [chatbot, user_input],
+                    [chatbot, user_input],
+                    queue=False,
+                )
+                .then(
+                    event_handlers.bot_response,
+                    [chatbot, session],
+                    [
+                        chatbot,
+                        session,
+                        source_location,
+                    ],
+                )
+            )
+            inputs_event.then(
+                lambda: gr.update(interactive=True), None, [user_input], queue=False
+            )
+        return bot_interface
diff --git a/examples/genai-gradio-example/frontend/app/config.ini b/examples/genai-gradio-example/frontend/app/config.ini
new file mode 100644
index 0000000000..55d4c76e13
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/config.ini
@@ -0,0 +1,21 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+[gradio]
+title=Gradio powered by Gen AI
+initial-message=Hello, how may I help you?
+text-placeholder=Enter Text
+location-label=Source Location
+error-response=Apologies, something went wrong. Please start a new conversation or try after some time.
+
+[secret]
+gradio-user=admin
+password-secret-id=fea-admin-pass
+sm-project-id=ford-upshift-fea
+
+[api-config]
+llm-mw-endpoint=https://fea-llm-middleware-2z5uoxo2wa-uc.a.run.app
+get-llm-response=/predict
diff --git a/examples/genai-gradio-example/frontend/app/event_handlers.py b/examples/genai-gradio-example/frontend/app/event_handlers.py
new file mode 100644
index 0000000000..7c75ccea2c
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/event_handlers.py
@@ -0,0 +1,130 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""This module defines handlers (methods) for events raised by UI components."""
+
+import configparser
+from google.cloud import speech
+import gradio as gr
+from middleware import MiddlewareService
+from models.session_info import SessionInfo
+
+
+class EventHandlers:
+    """A class to define event handlers for chatbot UI components.
+
+    Attributes:
+        config: A configparser having gradio configurations loaded from the
+            config.ini file.
+        middleware_service: A middleware service to get bot responses.
+    """
+
+    config: configparser.ConfigParser
+    middleware_service: MiddlewareService
+
+    def __init__(self, config: configparser.ConfigParser):
+        self.config = config
+        self.middleware_service = MiddlewareService()
+
+    def transcribe_file(self, speech_file: str) -> str:
+        """Transcribes the audio file and returns the converted text.
+
+        Args:
+            speech_file (str): Path to the speech file.
+
+        Returns:
+            text (str): Transcript generated for the input speech.
+        """
+        text = ""
+        client = speech.SpeechClient()
+
+        with open(speech_file, "rb") as audio_file:
+            content = audio_file.read()
+
+        audio = speech.RecognitionAudio(content=content)
+        config = speech.RecognitionConfig(
+            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
+            language_code="en-US",
+        )
+
+        response = client.recognize(config=config, audio=audio)
+
+        # Each result is for a consecutive portion of the audio. Concatenate
+        # them to get the transcript for the entire audio file.
+        for result in response.results:
+            # The first alternative is the most likely one for this portion.
+            text += result.alternatives[0].transcript
+
+        return text
+
+    def add_user_input(self, history, text):
+        """Adds user input to the chat history.
+
+        Args:
+            history (dict): A dictionary that stores the user's chat.
+            text (str): String input to the chatbot.
+
+        Returns:
+            history (dict): A dictionary that stores the user's chat.
+        """
+        if bool(text):
+            history = history + [(text, None)]
+        return history, gr.update(value="", interactive=False)
+
+    def clear_history(self, history, session):
+        """Clears the chat history.
+
+        Args:
+            history (dict): A dictionary that stores the user's chat.
+            session: Current session object.
+
+        Returns:
+            history (dict): A dictionary that stores the user's chat.
+            session: Current session object.
+            source_location (None): To clear the source location textbox.
+        """
+        history = [(None, self.config["initial-message"])]
+        session = []
+        session.append(SessionInfo())
+        return history, session, None
+
+    def bot_response(self, history, session):
+        """Returns the session, source location and chat history with the updated bot response.
+
+        Args:
+            history (dict): A dictionary that stores the user's chat.
+            session: Current session object.
+
+        Returns:
+            history (dict): A dictionary that stores the user's chat.
+            session: Current session object.
+            source_location: A string representing the manual/spec location. Usually a
+                gcs link.
+        """
+        if not session:
+            session.append(SessionInfo())
+
+        session_info: SessionInfo = session[0]
+        response = ""
+        source_location = ""
+        if history[-1][0] is None:
+            return history, session, source_location
+        else:
+            model_response = self.middleware_service.get_bot_response(
+                history[-1][0], session_info
+            )
+            if model_response is None:
+                response = self.config["error-response"]
+            else:
+                response = model_response.output_text
+                source_location = model_response.gcs_link
+
+        if not bool(response):
+            response = self.config["error-response"]
+
+        history[-1][1] = response
+        return (history, session, source_location)
diff --git a/examples/genai-gradio-example/frontend/app/main.py b/examples/genai-gradio-example/frontend/app/main.py
new file mode 100644
index 0000000000..e89602ecf2
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/main.py
@@ -0,0 +1,53 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""This module is an entry point for the application to launch the user interface."""
+
+import configparser
+import os
+import sys
+from bot_interface import BotInterface as bot
+from secret_manager import SecretManager
+
+
+class BotAgent:
+    """A class to launch the Bot agent.
+
+    Attributes:
+        gradio_config: A configparser having gradio configurations loaded from the
+            config.ini file.
+        secret_config: A configparser having secret configurations loaded from the
+            config.ini file.
+    """
+
+    gradio_config: configparser.SectionProxy
+    secret_config: configparser.SectionProxy
+
+    def __init__(self):
+        """Initializes the Bot Agent and loads application configurations."""
+
+        configuration = configparser.ConfigParser()
+        current_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
+        configuration.read(os.path.join(current_dir, "config.ini"))
+        self.gradio_config = configuration["gradio"]
+        self.secret_config = configuration["secret"]
+
+    def init_bot(self):
+        """An entry point of the application to launch the Bot."""
+
+        interface = bot().initialize(self.gradio_config)
+        interface.title = self.gradio_config["title"]
+        return interface
+
+
+if __name__ == "__main__":
+    bot_agent = BotAgent()
+    bot_app = bot_agent.init_bot()
+    user = bot_agent.secret_config["gradio-user"]
+    password = SecretManager(bot_agent.secret_config).access_secret_version(
+        bot_agent.secret_config["password-secret-id"]
+    )
+    bot_app.launch(server_name="0.0.0.0", server_port=8080, auth=(user, password))
diff --git a/examples/genai-gradio-example/frontend/app/middleware.py b/examples/genai-gradio-example/frontend/app/middleware.py
new file mode 100644
index 0000000000..7ebbcde251
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/middleware.py
@@ -0,0 +1,75 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""Module to call Middleware APIs to generate Bot responses."""
+
+import configparser
+import logging
+import google.auth.transport.requests
+import google.oauth2.id_token
+from models.middleware_reponse import ModelResponse
+from models.session_info import SessionInfo
+import requests
+
+
+class MiddlewareService:
+    """A class that defines methods to get responses from the Middleware API.
+
+    Attributes:
+        api_config: A configparser having middleware api configurations loaded
+            from the config.ini file.
+    """
+
+    def __init__(self):
+        config = configparser.ConfigParser()
+        config.read("config.ini")
+        self.api_config = config["api-config"]
+
+    def get_id_token(self, url: str):
+        """Generates an identity token for authentication.
+
+        Args:
+            url (str): Middleware API host url to generate the token for.
+
+        Returns:
+            id_token: Identity token.
+        """
+        auth_req = google.auth.transport.requests.Request()
+        id_token = google.oauth2.id_token.fetch_id_token(auth_req, url)
+        return id_token
+
+    def get_bot_response(
+        self, user_input: str, session_info: SessionInfo
+    ) -> ModelResponse:
+        """Calls the middleware to get bot responses.
+
+        Args:
+            user_input (str): User input.
+            session_info: A SessionInfo object storing session id information.
+
+        Returns:
+            model_res: A ModelResponse object storing the LLM response and source
+                location.
+        """
+        try:
+            headers = {
+                "Authorization": (
+                    f"Bearer {self.get_id_token(self.api_config['llm-mw-endpoint'])}"
+                )
+            }
+            response = requests.post(
+                self.api_config["llm-mw-endpoint"]
+                + self.api_config["get-llm-response"],
+                json={"user_input": user_input, "session_id": session_info.id},
+                headers=headers,
+            )
+            response.raise_for_status()
+            jsonres = response.json()
+            model_res = ModelResponse()
+            model_res.output_text = jsonres["output"]
+            model_res.gcs_link = jsonres["gcs_link"] if "gcs_link" in jsonres else ""
+            return model_res
+        except requests.exceptions.HTTPError as err:
+            logging.error(err)
+            return None
diff --git a/examples/genai-gradio-example/frontend/app/models/middleware_reponse.py b/examples/genai-gradio-example/frontend/app/models/middleware_reponse.py
new file mode 100644
index 0000000000..181f161696
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/models/middleware_reponse.py
@@ -0,0 +1,23 @@
+# ============================================================================
+# Copyright 2023 Google.
This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""A module that represents the middleware response."""
+
+
+class ModelResponse:
+    """A model class for the middleware response.
+
+    Attributes:
+        output_text: Represents a plain text response.
+        gcs_link: Stores links to manuals/specifications.
+    """
+
+    output_text: str
+    gcs_link: str
+
+    def __init__(self) -> None:
+        self.output_text = ""
+        self.gcs_link = ""
diff --git a/examples/genai-gradio-example/frontend/app/models/session_info.py b/examples/genai-gradio-example/frontend/app/models/session_info.py
new file mode 100644
index 0000000000..7c9882e601
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/models/session_info.py
@@ -0,0 +1,21 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""A module that represents the session state."""
+
+
+import uuid
+
+
+class SessionInfo:
+    """A class to represent the user's session state.
+
+    Attributes:
+        id: A unique id of the user's session.
+    """
+
+    def __init__(self) -> None:
+        self.id = str(uuid.uuid4())
diff --git a/examples/genai-gradio-example/frontend/app/secret_manager.py b/examples/genai-gradio-example/frontend/app/secret_manager.py
new file mode 100644
index 0000000000..37257eef84
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/app/secret_manager.py
@@ -0,0 +1,43 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose.
Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""This module communicates with Secret Manager to get secret values."""
+
+import configparser
+from google.cloud import secretmanager
+
+
+class SecretManager:
+    """SecretManager class to load secrets from the GCP Secret Manager service.
+
+    Attributes:
+        config: A configparser having secret configurations loaded from the
+            config.ini file.
+    """
+
+    def __init__(self, config: configparser.SectionProxy):
+        """Initializes the class and loads secret manager configurations.
+
+        Args:
+            config: A configparser having secret configurations loaded from the
+                config.ini file.
+        """
+        self.config = config
+
+    def access_secret_version(self, secret_id: str, version_id="latest") -> str:
+        """Reads a secret value from Secret Manager.
+
+        Args:
+            secret_id: A string id from Secret Manager to identify the secret.
+            version_id: A string id representing the secret version.
+
+        Returns:
+            The secret value.
+        """
+        client = secretmanager.SecretManagerServiceClient()
+        name = f"projects/{self.config['sm-project-id']}/secrets/{secret_id}/versions/{version_id}"
+        response = client.access_secret_version(name=name)
+        return response.payload.data.decode("UTF-8")
diff --git a/examples/genai-gradio-example/frontend/requirements.txt b/examples/genai-gradio-example/frontend/requirements.txt
new file mode 100644
index 0000000000..55c430d252
--- /dev/null
+++ b/examples/genai-gradio-example/frontend/requirements.txt
@@ -0,0 +1,5 @@
+gradio==3.38.0
+google-cloud-speech==2.21.0
+google-cloud-secret-manager==2.10.0
+torch
+
diff --git a/examples/genai-gradio-example/llm-middleware/Dockerfile b/examples/genai-gradio-example/llm-middleware/Dockerfile
new file mode 100644
index 0000000000..21ed9086e5
--- /dev/null
+++ b/examples/genai-gradio-example/llm-middleware/Dockerfile
@@ -0,0 +1,14 @@
+
+FROM python:3.11-slim
+
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy local code to the container image.
+ENV APP_HOME /app
+ENV PORT 8080
+WORKDIR $APP_HOME
+COPY ./app ./
+
+# Run the web service on container startup.
+CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app
\ No newline at end of file
diff --git a/examples/genai-gradio-example/llm-middleware/README.md b/examples/genai-gradio-example/llm-middleware/README.md
new file mode 100644
index 0000000000..28be18e77c
--- /dev/null
+++ b/examples/genai-gradio-example/llm-middleware/README.md
@@ -0,0 +1,47 @@
+```
+Copyright 2023 Google. This software is provided as-is, without warranty or
+representation for any use or purpose. Your use of it is subject to your
+agreement with Google.
+```
+
+# LLM Middleware
+The service calls the Gen AI agent to get responses from large language models over the Dialogflow CX APIs.
+
+## Library
+Install the Dialogflow CX library before executing the code. See https://cloud.google.com/dialogflow/cx/docs/reference/library/python for more information.
+
+```
+pip3 install google-cloud-dialogflow-cx==1.21.0
+```
+
+## Replace the following parameters in ./app/config.ini with your values
+```
+DIALOGFLOW_CX_PROJECT_ID: the GCP project id where the Dialogflow agent is running.
+DIALOGFLOW_LOCATION_ID: the GCP region of your Dialogflow agent. It is generally 'global' for globally serving agents.
+DIALOGFLOW_CX_AGENT_ID: the unique id of the Dialogflow agent.
+```
+
+## How to run the application
+From the llm-middleware/app folder, run the following command:
+```
+$ python3 app.py
+```
+
+## API Endpoint
+1. /predict
+```
+Request Payload:
+{
+    "user_input" : "How to?",
+    "session_id" : "amdm244"
+}
+```
+
+```
+Response Payload:
+{
+    "output": "I'm not sure what you mean.
Can you rephrase your question?",
+    "gcs_link": "https://storage.cloud.google.com/docs/sample.pdf",
+    "success": true
+}
+```
diff --git a/examples/genai-gradio-example/llm-middleware/app/app.py b/examples/genai-gradio-example/llm-middleware/app/app.py
new file mode 100644
index 0000000000..b756404708
--- /dev/null
+++ b/examples/genai-gradio-example/llm-middleware/app/app.py
@@ -0,0 +1,41 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""A module to expose middleware api endpoints."""
+
+
+import configparser
+import logging
+import os
+from detect_intent import DetectIntent
+from flask import Flask, jsonify, request
+from requests import RequestException
+
+config = configparser.ConfigParser()
+config.read("config.ini")
+dialogflow_config = config["dialogflow"]
+
+app = Flask(__name__)
+
+
+@app.route("/predict", methods=["POST"])
+def predict():
+    """Endpoint to get responses from the Gen AI agent."""
+    try:
+        data = request.get_json(force=True)
+        user_input = data["user_input"]
+        session_id = data["session_id"]
+        output, gcs_link = DetectIntent(dialogflow_config).get_llm_response(
+            user_input, session_id
+        )
+        return jsonify({"success": True, "output": output, "gcs_link": gcs_link})
+    except RequestException as ex:
+        logging.error(ex)
+        # Return an explicit error response instead of falling through with None.
+        return jsonify({"success": False, "output": "", "gcs_link": ""}), 500
+
+
+if __name__ == "__main__":
+    port = int(os.environ.get("PORT", 8080))
+    app.run(debug=True, host="0.0.0.0", port=port)
diff --git a/examples/genai-gradio-example/llm-middleware/app/config.ini b/examples/genai-gradio-example/llm-middleware/app/config.ini
new file mode 100644
index 0000000000..44a4573948
--- /dev/null
+++ b/examples/genai-gradio-example/llm-middleware/app/config.ini
@@ -0,0 +1,5 @@
+[dialogflow]
+project-id=DIALOGFLOW_CX_PROJECT_ID
+location-id=DIALOGFLOW_LOCATION_ID
+agent-id=DIALOGFLOW_CX_AGENT_ID
+language-code=en-us
\ No newline at end of file
diff --git a/examples/genai-gradio-example/llm-middleware/app/detect_intent.py b/examples/genai-gradio-example/llm-middleware/app/detect_intent.py
new file mode 100644
index 0000000000..92730df733
--- /dev/null
+++ b/examples/genai-gradio-example/llm-middleware/app/detect_intent.py
@@ -0,0 +1,88 @@
+# ============================================================================
+# Copyright 2023 Google. This software is provided as-is, without warranty or
+# representation for any use or purpose. Your use of it is subject to your
+# agreement with Google.
+# ============================================================================
+
+"""Module to fetch responses from the Gen AI agent."""
+
+
+import configparser
+from google.cloud.dialogflowcx_v3beta1.services.agents import AgentsClient
+from google.cloud.dialogflowcx_v3beta1.services.sessions import SessionsClient
+from google.cloud.dialogflowcx_v3beta1.types import session
+
+
+class DetectIntent:
+    """A class to provide the LLM response.
+
+    Attributes:
+        config: A configparser having dialogflow configurations loaded from the
+            config.ini file.
+    """
+
+    config: configparser.ConfigParser
+
+    def __init__(self, config) -> None:
+        self.config = config
+
+    def get_llm_response(self, query: str, session_id: str) -> tuple:
+        """Gets the response from the Gen AI agent.
+
+        Args:
+            query (str): User's query.
+            session_id: Unique session id.
+
+        Returns:
+            tuple: The LLM response text and source links.
+        """
+        project_id = self.config["project-id"]
+        location_id = self.config["location-id"]
+        agent_id = self.config["agent-id"]
+        agent = f"projects/{project_id}/locations/{location_id}/agents/{agent_id}"
+        texts = [query]
+        language_code = self.config["language-code"]
+
+        return self.detect_intent(agent, session_id, texts, language_code)
+
+    def detect_intent(self, agent, session_id, texts, language_code) -> tuple:
+        """Detects the intent and gets the response from the LLM.
+
+        Args:
+            agent (str): Dialogflow agent to connect to.
+            session_id: Unique session id.
+            texts: List of user queries.
+            language_code: The language code to be used to communicate with the
+                agent.
+
+        Returns:
+            tuple: The LLM response text and source links.
+        """
+        session_path = f"{agent}/sessions/{session_id}"
+        client_options = None
+        agent_components = AgentsClient.parse_agent_path(agent)
+        location_id = agent_components["location"]
+        if location_id != "global":
+            api_endpoint = f"{location_id}-dialogflow.googleapis.com:443"
+            client_options = {"api_endpoint": api_endpoint}
+        session_client = SessionsClient(client_options=client_options)
+
+        output = ""
+        links = ""
+        for text in texts:
+            text_input = session.TextInput(text=text)
+            query_input = session.QueryInput(
+                text=text_input, language_code=language_code
+            )
+            request = session.DetectIntentRequest(
+                session=session_path, query_input=query_input
+            )
+            response = session_client.detect_intent(request=request)
+            for msg in response.query_result.response_messages:
+                if msg.text and msg.text.text:
+                    # Concatenate all text fragments of the response message.
+                    output += "".join(msg.text.text)
+                if msg.payload is not None and "richContent" in msg.payload:
+                    for rc in msg.payload["richContent"]:
+                        links += rc[0]["actionLink"] + "\n"
+
+        return output, links
diff --git a/examples/genai-gradio-example/llm-middleware/requirements.txt b/examples/genai-gradio-example/llm-middleware/requirements.txt
new file mode 100644
index 0000000000..7bde90d321
--- /dev/null
+++ 
b/examples/genai-gradio-example/llm-middleware/requirements.txt
@@ -0,0 +1,4 @@
+Flask
+google-cloud-dialogflow-cx==1.21.0
+gunicorn
+requests
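
As a quick sanity check of the `/predict` contract documented in the llm-middleware README, the sketch below builds a request payload and parses a response of the documented shape without any network calls. The middleware URL is a placeholder assumption; for a real call you would POST `build_predict_request(...)` to your Cloud Run service URL with an identity token, as `middleware.py` does.

```python
import json

# Hypothetical URL; replace with your deployed Cloud Run service URL.
MIDDLEWARE_URL = "https://llm-middleware-example.a.run.app"


def build_predict_request(user_input: str, session_id: str) -> str:
    """Serializes the /predict request payload documented in the README."""
    return json.dumps({"user_input": user_input, "session_id": session_id})


def parse_predict_response(body: str) -> tuple:
    """Extracts (output, gcs_link), tolerating a missing gcs_link field."""
    data = json.loads(body)
    if not data.get("success", False):
        raise RuntimeError("middleware reported failure")
    return data["output"], data.get("gcs_link", "")


if __name__ == "__main__":
    req = build_predict_request("How to?", "amdm244")
    sample = (
        '{"output": "I\'m not sure what you mean.", '
        '"gcs_link": "https://storage.cloud.google.com/docs/sample.pdf", '
        '"success": true}'
    )
    print(req)
    print(parse_predict_response(sample))
```

This mirrors how the Gradio frontend consumes the middleware: it reads `output` for the chat reply and falls back to an empty `gcs_link` when the agent returns no source link.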