From c02204e92f3f3bfadccf677bd1bf4d0c508aea1d Mon Sep 17 00:00:00 2001 From: Takumi Ohyama Date: Tue, 17 Dec 2024 12:35:16 +0000 Subject: [PATCH] Remove old Gen AI eval notebooks --- .../labs/vertex_llm_evaluation.ipynb | 656 ----------------- .../solutions/vertex_llm_evaluation.ipynb | 672 ------------------ 2 files changed, 1328 deletions(-) delete mode 100644 notebooks/vertex_genai/labs/vertex_llm_evaluation.ipynb delete mode 100644 notebooks/vertex_genai/solutions/vertex_llm_evaluation.ipynb diff --git a/notebooks/vertex_genai/labs/vertex_llm_evaluation.ipynb b/notebooks/vertex_genai/labs/vertex_llm_evaluation.ipynb deleted file mode 100644 index 49222962..00000000 --- a/notebooks/vertex_genai/labs/vertex_llm_evaluation.ipynb +++ /dev/null @@ -1,656 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "JAPoU8Sm5E6e", - "tags": [] - }, - "source": [ - "# Evaluate LLMs with Vertex AutoSxS Model Evaluation" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "d975e698c9a4" - }, - "source": [ - "## Learning Objectives\n", - "\n", - "1) Learn how to create evaluation data\n", - "1) Learn how to setup a AutoSxS model evaluation pipeline\n", - "1) Learn how to run the evaluation pipeline job\n", - "1) Learn how to check the autorater judgments\n", - "1) Learn how to evaluate how much AutoSxS is aligned with human judgment\n", - "\n", - "\n", - "In this notebook, we will use Vertex AI Model Evaluation AutoSxS (pronounced Auto Side-by-Side) to compare two LLMs predictions in a summarization task in order to understand which LLM model did a better job at the summarization task. Provided that we have some additional human judgments as to which model is better for part of the dataset, then we will demonstrate how to evaluate the alignment of AutoSxS with human judgment. 
(Note that Vertex AI Model Evaluation AutoSxS allows you to compare the performance of Google-first-party and Third-party LLMs, provided the model responses are stored in a JSONL evaluation file.)\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "WReHDGG5g0XY" - }, - "source": [ - "## Setup\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "PyQmSRbKA8r-", - "tags": [] - }, - "outputs": [], - "source": [ - "import datetime\n", - "import os\n", - "import pprint\n", - "\n", - "import pandas as pd\n", - "from google.cloud import aiplatform\n", - "from google.protobuf.json_format import MessageToDict\n", - "from IPython.display import HTML, display" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "oM1iC_MfAts1", - "tags": [] - }, - "outputs": [], - "source": [ - "project_id_list = !gcloud config get-value project 2> /dev/null\n", - "PROJECT_ID = project_id_list[0]\n", - "BUCKET = PROJECT_ID\n", - "REGION = \"us-central1\"\n", - "\n", - "# Evaluation data containing competing models responses\n", - "EVALUATION_FILE_URI = \"gs://cloud-training/specialized-training/llm_eval/sum_eval_gemini_dataset_001.jsonl\"\n", - "HUMAN_EVALUATION_FILE_URI = \"gs://cloud-training/specialized-training/llm_eval/sum_human_eval_gemini_dataset_001.jsonl\"\n", - "\n", - "# AutoSxS Vertex Pipeline template\n", - "TEMPLATE_URI = (\n", - " \"https://us-kfp.pkg.dev/ml-pipeline/llm-rlhf/autosxs-template/default\"\n", - ")\n", - "\n", - "print(f\"Your project ID is set to {PROJECT_ID}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-EcIXiGsCePi" - }, - "source": [ - "Let us make sure the bucket where AutoSxS will export the data exists, and if not, let us create it:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "NIq7R4HZCfIc", - "tags": [] - }, - "outputs": [], - "source": [ - "! gsutil ls gs://{BUCKET} > /dev/null || gsutil mb -l {REGION} -p {PROJECT_ID} gs://{BUCKET}" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "init_aip:mbsdk,all", - "jp-MarkdownHeadingCollapsed": true, - "tags": [] - }, - "source": [ - "At last, let us initialize the `aiplatform` client in the cell below:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "j4KEcQEWROby", - "tags": [] - }, - "outputs": [], - "source": [ - "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "qoiqIyiMvc3n" - }, - "source": [ - "## Evaluate LLMs using Vertex AI Model Evaluation AutoSxS\n", - "\n", - "Suppose you've obtained your LLM-generated predictions in a summarization task. To evaluate LLMs such as Gemini-Pro on Vertex AI against another using [AutoSXS](https://cloud.google.com/vertex-ai/generative-ai/docs/models/side-by-side-eval), you need to follow these steps for evaluation:\n", - "\n", - "1. **Prepare the Evaluation Dataset**: Gather your prompts, contexts, generated responses and human preference required for the evaluation.\n", - "\n", - "2. **Convert the Evaluation Dataset:** Convert the dataset into the JSONL format and store it in a Cloud Storage bucket. (Alternatively, you can save the dataset to a BigQuery table.)\n", - "\n", - "3. 
**Run a Model Evaluation Job:** Use Vertex AI to run a model evaluation job to assess the performance of the LLM.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "08d289fa873f" - }, - "source": [ - "### Dataset\n", - "\n", - "The dataset is a modified sample of the [XSum](https://huggingface.co/datasets/EdinburghNLP/xsum) dataset for evaluation of abstractive single-document summarization systems." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ZEIlO0eHbsQh" - }, - "source": [ - "### Read the evaluation data\n", - "\n", - "In this summarization use case, you use `sum_eval_gemini_dataset_001`, a JSONL-formatted evaluation datasets which contains content-response pairs without human preferences.\n", - "\n", - "In the dataset, each row represents a single example. The dataset includes ID fields, such as \"id\" and \"document,\" which are used to identify each unique example. The \"document\" field contains the newspaper articles to be summarized.\n", - "\n", - "While the dataset does not have [data fields](https://cloud.google.com/vertex-ai/docs/generative-ai/models/side-by-side-eval#prep-eval-dataset) for prompts and contexts, it does include pre-generated predictions. These predictions contain the generated response according to the LLMs task, with \"response_a\" and \"response_b\" representing different article summaries generated by two different LLM models.\n", - "\n", - "**Note: For experimentation, you can provide only a few examples. The documentation recommends at least 400 examples to ensure high-quality aggregate metrics.**\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "R-_ettKRxfxT", - "tags": [] - }, - "outputs": [], - "source": [ - "evaluation_gemini_df = pd.read_json(EVALUATION_FILE_URI, lines=True)\n", - "evaluation_gemini_df.head()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "1lZHraNFkDz8" - }, - "source": [ - "### Run a model evaluation job\n", - "\n", - "AutoSxS relays on Vertex AI pipelines to run model evaluation. And here you can see some of the required pipeline parameters:\n", - "\n", - "* `evaluation_dataset` to indicate where the evaluation dataset location. In this case, it is the JSONL Cloud bucket URI.\n", - "\n", - "* `id_colums` to distinguish evaluation examples that are unique. Here, as you can imagine, your have `id` and `document` fields. These fields will be added in the judgment table generated by AutoSxS.\n", - "\n", - "* `task` to indicate the task type you want to evaluate. It can be `summarization` or `question_answer`. In this case you have `summarization`.\n", - "\n", - "* `autorater_prompt_parameters` to configure the autorater task behavior. You can specify inference instructions to guide task completion, as well as setting the inference context to refer during the task execution. For example, for the summarization task below we have that `autorater_prompt_parameters` is specified by a dictionary containing the name of the field containing the summarization context (i.e. the document to summarize) as well as the summarization instruction itself:\n", - "```python\n", - " {\n", - " \"inference_context\": {\"column\": \"document\"},\n", - " \"inference_instruction\": {\"template\": \"Summarize the following text: \"},\n", - " },\n", - "```\n", - "\n", - "Lastly, you have to provide `response_column_a` and `response_column_b` with the names of columns containing predefined predictions in order to calculate the evaluation metrics. 
In this case, `response_a` and `response_b` respectively. Note that we can simply specify the actual models through the `model_a` and `model_b` fields (as long as these models are stored in Vertex model registry) instead of providing the pre-generated responses (through the `response_column_a` and `response_column_b` fields). To learn more about all supported parameters and their usage, see the [official documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/models/side-by-side-eval#perform-eval).\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise\n", - "\n", - "In the first code cell below, configure the `parameters` argument so that it launches an summarization evaluation job task between the two LLMs responses stored in the evaluation data located at `EVALUATION_FILE_URI`. In the second cell below, configure the `PipelineJob` to launch an AutoSxS Vertex Pipeline using the pipeline template stored at `TEMPLATE_URI`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Cp7e-hOmNMhA", - "tags": [] - }, - "outputs": [], - "source": [ - "timestamp = str(datetime.datetime.now().timestamp()).replace(\".\", \"\")\n", - "display_name = f\"autosxs-{timestamp}\"\n", - "pipeline_root = os.path.join(\"gs://\", BUCKET, display_name)\n", - "\n", - "parameters = None # TODO" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Bp0YIvSv-zhB" - }, - "source": [ - "After you define the model evaluation parameters, you can run a model evaluation pipeline job using the predifined pipeline template with Vertex AI Python SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "AjFHT5ze9m4L", - "tags": [] - }, - "outputs": [], - "source": [ - "job = aiplatform.PipelineJob(None) # TODO\n", - "job.run(sync=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "PunFdLfqGh0e" - }, - "source": [ - "### Evaluate the results\n", - "\n", - "After the evaluation pipeline successfully run, you can review the evaluation results by looking both at artifacts generated by the pipeline itself in Vertex AI Pipelines UI and in the notebook enviroment using the Vertex AI Python SDK.\n", - "\n", - "AutoSXS produces three types of evaluation results: a judgments table, aggregated metrics, and alignment metrics (if human preferences are provided).\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "J4ShokOI9FDI" - }, - "source": [ - "### AutoSxS Judgments\n", - "\n", - "The judgments table contains metrics that offer insights of LLM performance per each example.\n", - "\n", - "For each response pair, the judgments table includes a `choice` column indicating the better response based on the evaluation criteria used by the autorater.\n", - "\n", - "Each choice has a `confidence score` column between 0 and 1, representing the autorater's level of confidence in the evaluation.\n", - "\n", - "Last but not less important, AutoSXS provides an explanation for why the autorater preferred one response over the other.\n", - "\n", - "Below you have an example of AutoSxS judgments output." 
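Before the extraction code below, here is a minimal sketch of how the judgments table could be summarized once it is loaded into a DataFrame. It assumes only the `choice` and `confidence` columns described above; adjust the names if your pipeline version labels them differently.

```python
# Minimal sketch: summarize the judgments table.
# Assumes judgments_df has the "choice" and "confidence" columns described above.
import pandas as pd


def summarize_judgments(judgments_df: pd.DataFrame) -> None:
    """Print the share of wins per model and the average autorater confidence."""
    print(judgments_df["choice"].value_counts(normalize=True))
    print(f"Mean confidence: {judgments_df['confidence'].mean():.2f}")
```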
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "MakdmpYCmehF", - "tags": [] - }, - "outputs": [], - "source": [ - "online_eval_task = [\n", - " task\n", - " for task in job.task_details\n", - " if task.task_name == \"online-evaluation-pairwise\"\n", - "][0]\n", - "\n", - "\n", - "judgments_uri = MessageToDict(online_eval_task.outputs[\"judgments\"]._pb)[\n", - " \"artifacts\"\n", - "][0][\"uri\"]\n", - "\n", - "judgments_df = pd.read_json(judgments_uri, lines=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next cell contains a helper function to print a sample from AutoSxS judgments nicely in the notebook." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "_gyM2-i3HHnP", - "tags": [] - }, - "outputs": [], - "source": [ - "def print_autosxs_judgments(df, n=3):\n", - " \"\"\"Print AutoSxS judgments in the notebook\"\"\"\n", - "\n", - " style = \"white-space: pre-wrap; width: 800px; overflow-x: auto;\"\n", - " df = df.sample(n=n)\n", - "\n", - " for index, row in df.iterrows():\n", - " if row[\"confidence\"] >= 0.5:\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Document:</h2> <div style='{style}'>{row['document']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Response A:</h2> <div style='{style}'>{row['response_a']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Response B:</h2> <div style='{style}'>{row['response_b']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Explanation:</h2> <div style='{style}'>{row['explanation']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Confidence score:</h2> <div style='{style}'>{row['confidence']}</div>
\"\n", - " )\n", - " )\n", - " display(HTML(\"
\"))\n", - "\n", - "\n", - "print_autosxs_judgments(judgments_df)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "tJ5PJ9x69KrC" - }, - "source": [ - "### AutoSxS Aggregate metrics\n", - "\n", - "AutoSxS also provides aggregated metrics as an additional evaluation result. These win-rate metrics are calculated by utilizing the judgments table to determine the percentage of times the autorater preferred one model response. \n", - "\n", - "These metrics are relevant for quickly find out which is the best model in the context of the evaluated task.\n", - "\n", - "Below you have an example of AutoSxS Aggregate metrics." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "w2RISjQSJk9R", - "tags": [] - }, - "outputs": [], - "source": [ - "metrics_eval_task = [\n", - " task\n", - " for task in job.task_details\n", - " if task.task_name == \"model-evaluation-text-generation-pairwise\"\n", - "][0]\n", - "\n", - "\n", - "win_rate_metrics = MessageToDict(\n", - " metrics_eval_task.outputs[\"autosxs_metrics\"]._pb\n", - ")[\"artifacts\"][0][\"metadata\"]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next cell contains a helper function to print AutoSxS aggregated metrics nicely in the notebook." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "def print_aggregated_metrics(scores):\n", - " \"\"\"Print AutoSxS aggregated metrics\"\"\"\n", - "\n", - " score_b = round(win_rate_metrics[\"autosxs_model_b_win_rate\"] * 100)\n", - " display(\n", - " HTML(\n", - " f\"

<h3>AutoSxS Autorater prefers {score_b}% of time Model B over Model A</h3>
\"\n", - " )\n", - " )\n", - "\n", - "\n", - "print_aggregated_metrics(win_rate_metrics)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_5mYmHj6poXz" - }, - "source": [ - "### Human-preference alignment metrics\n", - "\n", - "After reviewing the results of your initial AutoSxS evalution, you may wonder about the reliability of the Autorater assessment's alignment with human raters' views.\n", - "\n", - "AutoSxS supports human preference to validate Autorater evaluation.\n", - "\n", - "To check alignment with a human-preference dataset, you need to add the ground truths as a column to the `evaluation_dataset` and pass the column name to `human_preference_column`." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "8vSedvz39-iu" - }, - "source": [ - "#### Read the evaluation data\n", - "\n", - "With respect of evaluation dataset, in this case the `sum_human_eval_gemini_dataset_001` dataset also includes human preferences.\n", - "\n", - "Below you have a sample of the dataset." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "mbfsO2uw9-i5", - "tags": [] - }, - "outputs": [], - "source": [ - "human_evaluation_gemini_df = pd.read_json(HUMAN_EVALUATION_FILE_URI, lines=True)\n", - "human_evaluation_gemini_df.head()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XpmAr6UX-Imb" - }, - "source": [ - "#### Run a model evaluation job\n", - "\n", - "With respect to the AutoSXS pipeline, you must specify the human preference column in the pipeline parameters.\n", - "\n", - "Then, you can run the evaluation pipeline job using the Vertex AI Python SDK as shown below.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise\n", - "\n", - "\n", - "In the first code cell below, configure the `parameters` argument so that it launches a human-alignment evaluation job using the human judgments stored in the evaluation data located at `HUMAN_EVALUATION_FILE_URI` in the `actual` field. In the second cell below, configure the `PipelineJob` to launch the AutoSxS Vertex Pipeline using the pipeline template stored at `TEMPLATE_URI`." 
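Before configuring the pipeline, it can help to sanity-check the human-preference labels. A small sketch, assuming `human_evaluation_gemini_df` was loaded above and carries the `actual` field referenced in this exercise:

```python
# Sketch: inspect the human-preference labels before launching the pipeline.
# Assumes human_evaluation_gemini_df (loaded above) contains the "actual" field.
assert "actual" in human_evaluation_gemini_df.columns, "missing human-preference column"
print(human_evaluation_gemini_df["actual"].value_counts(dropna=False))
print(f"Unlabeled rows: {human_evaluation_gemini_df['actual'].isna().mean():.1%}")
```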
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "bFmvFt2a3MtN", - "tags": [] - }, - "outputs": [], - "source": [ - "timestamp = str(datetime.datetime.now().timestamp()).replace(\".\", \"\")\n", - "display_name = f\"autosxs-human-eval-{timestamp}\"\n", - "pipeline_root = os.path.join(\"gs://\", BUCKET, display_name)\n", - "\n", - "parameters = None # TODO" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "KbhIPY-_3SSB", - "tags": [] - }, - "outputs": [], - "source": [ - "job = aiplatform.PipelineJob(None) # TODO\n", - "job.run(sync=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "QTzJ8BaWEusN" - }, - "source": [ - "### Get human-aligned aggregated metrics\n", - "\n", - "Compared with the aggregated metrics you get before, now the pipeline returns additional measurements that utilize human-preference data provided by you.\n", - "\n", - "Below you have a view of the resulting human-aligned aggregated metrics, comparing the win rates for models for both human preferenced and model inferences.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "JLUOJFjA38ja", - "tags": [] - }, - "outputs": [], - "source": [ - "human_eval_task = [\n", - " task\n", - " for task in job.task_details\n", - " if task.task_name == \"model-evaluation-text-generation-pairwise\"\n", - "][0]\n", - "\n", - "human_aligned_metrics = {\n", - " k: round(v, 3)\n", - " for k, v in MessageToDict(human_eval_task.outputs[\"autosxs_metrics\"]._pb)[\n", - " \"artifacts\"\n", - " ][0][\"metadata\"].items()\n", - " if \"win_rate\" in k\n", - "}" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next cell contains a helper function to print AutoSxS alignment metrics nicely in the notebook." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "def print_human_preference_metrics(metrics):\n", - " \"\"\"Print AutoSxS Human-preference alignment metrics\"\"\"\n", - " display(\n", - " HTML(\n", - " f\"

<h3>Human-preference alignment metrics: {metrics}</h3>
\"\n", - " )\n", - " )\n", - "\n", - "\n", - "pprint.pprint(human_aligned_metrics)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Acknowledgement \n", - "\n", - "This notebook is adapted from a [tutorial](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluate_gemini_with_autosxs.ipynb)\n", - "written by Ivan Nardini." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright 2024 Google LLC\n", - "\n", - "Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "you may not use this file except in compliance with the License.\n", - "You may obtain a copy of the License at\n", - "\n", - " https://www.apache.org/licenses/LICENSE-2.0\n", - "\n", - "Unless required by applicable law or agreed to in writing, software\n", - "distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "See the License for the specific language governing permissions and\n", - "limitations under the License." - ] - } - ], - "metadata": { - "colab": { - "name": "evaluate_gemini_with_autosxs.ipynb", - "toc_visible": true - }, - "environment": { - "kernel": "conda-base-py", - "name": "workbench-notebooks.m121", - "type": "gcloud", - "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m121" - }, - "kernelspec": { - "display_name": "Python 3 (ipykernel) (Local)", - "language": "python", - "name": "conda-base-py" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.14" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} diff --git a/notebooks/vertex_genai/solutions/vertex_llm_evaluation.ipynb b/notebooks/vertex_genai/solutions/vertex_llm_evaluation.ipynb deleted file mode 100644 index 03380510..00000000 --- a/notebooks/vertex_genai/solutions/vertex_llm_evaluation.ipynb +++ /dev/null @@ -1,672 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "JAPoU8Sm5E6e", - "tags": [] - }, - "source": [ - "# Evaluate LLMs with Vertex AutoSxS Model Evaluation" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "d975e698c9a4" - }, - "source": [ - "## Learning Objectives\n", - "\n", - "1) Learn how to create evaluation data\n", - "1) Learn how to setup a AutoSxS model evaluation pipeline\n", - "1) Learn how to run the evaluation pipeline job\n", - "1) Learn how to check the autorater judgments\n", - "1) Learn how to evaluate how much AutoSxS is aligned with human judgment\n", - "\n", - "\n", - "In this notebook, we will use Vertex AI Model Evaluation AutoSxS (pronounced Auto Side-by-Side) to compare two LLMs predictions in a summarization task in order to understand which LLM model did a better job at the summarization task. Provided that we have some additional human judgments as to which model is better for part of the dataset, then we will demonstrate how to evaluate the alignment of AutoSxS with human judgment. 
(Note that Vertex AI Model Evaluation AutoSxS allows you to compare the performance of Google-first-party and Third-party LLMs, provided the model responses are stored in a JSONL evaluation file.)\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "WReHDGG5g0XY" - }, - "source": [ - "## Setup\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "PyQmSRbKA8r-", - "tags": [] - }, - "outputs": [], - "source": [ - "import datetime\n", - "import os\n", - "import pprint\n", - "\n", - "import pandas as pd\n", - "from google.cloud import aiplatform\n", - "from google.protobuf.json_format import MessageToDict\n", - "from IPython.display import HTML, display" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "oM1iC_MfAts1", - "tags": [] - }, - "outputs": [], - "source": [ - "project_id_list = !gcloud config get-value project 2> /dev/null\n", - "PROJECT_ID = project_id_list[0]\n", - "BUCKET = PROJECT_ID\n", - "REGION = \"us-central1\"\n", - "\n", - "# Evaluation data containing competing models responses\n", - "EVALUATION_FILE_URI = \"gs://cloud-training/specialized-training/llm_eval/sum_eval_gemini_dataset_001.jsonl\"\n", - "HUMAN_EVALUATION_FILE_URI = \"gs://cloud-training/specialized-training/llm_eval/sum_human_eval_gemini_dataset_001.jsonl\"\n", - "\n", - "# AutoSxS Vertex Pipeline template\n", - "TEMPLATE_URI = (\n", - " \"https://us-kfp.pkg.dev/ml-pipeline/llm-rlhf/autosxs-template/default\"\n", - ")\n", - "\n", - "print(f\"Your project ID is set to {PROJECT_ID}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-EcIXiGsCePi" - }, - "source": [ - "Let us make sure the bucket where AutoSxS will export the data exists, and if not, let us create it:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "NIq7R4HZCfIc", - "tags": [] - }, - "outputs": [], - "source": [ - "! gsutil ls gs://{BUCKET} > /dev/null || gsutil mb -l {REGION} -p {PROJECT_ID} gs://{BUCKET}" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "init_aip:mbsdk,all", - "jp-MarkdownHeadingCollapsed": true, - "tags": [] - }, - "source": [ - "At last, let us initialize the `aiplatform` client in the cell below:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "j4KEcQEWROby", - "tags": [] - }, - "outputs": [], - "source": [ - "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "qoiqIyiMvc3n" - }, - "source": [ - "## Evaluate LLMs using Vertex AI Model Evaluation AutoSxS\n", - "\n", - "Suppose you've obtained your LLM-generated predictions in a summarization task. To evaluate LLMs such as Gemini-Pro on Vertex AI against another using [AutoSXS](https://cloud.google.com/vertex-ai/generative-ai/docs/models/side-by-side-eval), you need to follow these steps for evaluation:\n", - "\n", - "1. **Prepare the Evaluation Dataset**: Gather your prompts, contexts, generated responses and human preference required for the evaluation.\n", - "\n", - "2. **Convert the Evaluation Dataset:** Convert the dataset into the JSONL format and store it in a Cloud Storage bucket. (Alternatively, you can save the dataset to a BigQuery table.)\n", - "\n", - "3. 
**Run a Model Evaluation Job:** Use Vertex AI to run a model evaluation job to assess the performance of the LLM.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "08d289fa873f" - }, - "source": [ - "### Dataset\n", - "\n", - "The dataset is a modified sample of the [XSum](https://huggingface.co/datasets/EdinburghNLP/xsum) dataset for evaluation of abstractive single-document summarization systems." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ZEIlO0eHbsQh" - }, - "source": [ - "### Read the evaluation data\n", - "\n", - "In this summarization use case, you use `sum_eval_gemini_dataset_001`, a JSONL-formatted evaluation datasets which contains content-response pairs without human preferences.\n", - "\n", - "In the dataset, each row represents a single example. The dataset includes ID fields, such as \"id\" and \"document,\" which are used to identify each unique example. The \"document\" field contains the newspaper articles to be summarized.\n", - "\n", - "While the dataset does not have [data fields](https://cloud.google.com/vertex-ai/docs/generative-ai/models/side-by-side-eval#prep-eval-dataset) for prompts and contexts, it does include pre-generated predictions. These predictions contain the generated response according to the LLMs task, with \"response_a\" and \"response_b\" representing different article summaries generated by two different LLM models.\n", - "\n", - "**Note: For experimentation, you can provide only a few examples. The documentation recommends at least 400 examples to ensure high-quality aggregate metrics.**\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "R-_ettKRxfxT", - "tags": [] - }, - "outputs": [], - "source": [ - "evaluation_gemini_df = pd.read_json(EVALUATION_FILE_URI, lines=True)\n", - "evaluation_gemini_df.head()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "1lZHraNFkDz8" - }, - "source": [ - "### Run a model evaluation job\n", - "\n", - "AutoSxS relays on Vertex AI pipelines to run model evaluation. And here you can see some of the required pipeline parameters:\n", - "\n", - "* `evaluation_dataset` to indicate where the evaluation dataset location. In this case, it is the JSONL Cloud bucket URI.\n", - "\n", - "* `id_colums` to distinguish evaluation examples that are unique. Here, as you can imagine, your have `id` and `document` fields. These fields will be added in the judgment table generated by AutoSxS.\n", - "\n", - "* `task` to indicate the task type you want to evaluate. It can be `summarization` or `question_answer`. In this case you have `summarization`.\n", - "\n", - "* `autorater_prompt_parameters` to configure the autorater task behavior. You can specify inference instructions to guide task completion, as well as setting the inference context to refer during the task execution. For example, for the summarization task below we have that `autorater_prompt_parameters` is specified by a dictionary containing the name of the field containing the summarization context (i.e. the document to summarize) as well as the summarization instruction itself:\n", - "```python\n", - " {\n", - " \"inference_context\": {\"column\": \"document\"},\n", - " \"inference_instruction\": {\"template\": \"Summarize the following text: \"},\n", - " },\n", - "```\n", - "\n", - "Lastly, you have to provide `response_column_a` and `response_column_b` with the names of columns containing predefined predictions in order to calculate the evaluation metrics. 
In this case, `response_a` and `response_b` respectively. Note that we can simply specify the actual models through the `model_a` and `model_b` fields (as long as these models are stored in Vertex model registry) instead of providing the pre-generated responses (through the `response_column_a` and `response_column_b` fields). To learn more about all supported parameters and their usage, see the [official documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/models/side-by-side-eval#perform-eval).\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Cp7e-hOmNMhA", - "tags": [] - }, - "outputs": [], - "source": [ - "timestamp = str(datetime.datetime.now().timestamp()).replace(\".\", \"\")\n", - "display_name = f\"autosxs-{timestamp}\"\n", - "pipeline_root = os.path.join(\"gs://\", BUCKET, display_name)\n", - "\n", - "parameters = {\n", - " \"evaluation_dataset\": EVALUATION_FILE_URI,\n", - " \"id_columns\": [\"id\", \"document\"],\n", - " \"task\": \"summarization\",\n", - " \"autorater_prompt_parameters\": {\n", - " \"inference_context\": {\"column\": \"document\"},\n", - " \"inference_instruction\": {\"template\": \"Summarize the following text: \"},\n", - " },\n", - " \"response_column_a\": \"response_a\",\n", - " \"response_column_b\": \"response_b\",\n", - "}" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Bp0YIvSv-zhB" - }, - "source": [ - "After you define the model evaluation parameters, you can run a model evaluation pipeline job using the predifined pipeline template with Vertex AI Python SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "AjFHT5ze9m4L", - "tags": [] - }, - "outputs": [], - "source": [ - "job = aiplatform.PipelineJob(\n", - " job_id=display_name,\n", - " display_name=display_name,\n", - " pipeline_root=pipeline_root,\n", - " template_path=TEMPLATE_URI,\n", - " parameter_values=parameters,\n", - " enable_caching=False,\n", - ")\n", - "job.run(sync=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "PunFdLfqGh0e" - }, - "source": [ - "### Evaluate the results\n", - "\n", - "After the evaluation pipeline successfully run, you can review the evaluation results by looking both at artifacts generated by the pipeline itself in Vertex AI Pipelines UI and in the notebook enviroment using the Vertex AI Python SDK.\n", - "\n", - "AutoSXS produces three types of evaluation results: a judgments table, aggregated metrics, and alignment metrics (if human preferences are provided).\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "J4ShokOI9FDI" - }, - "source": [ - "### AutoSxS Judgments\n", - "\n", - "The judgments table contains metrics that offer insights of LLM performance per each example.\n", - "\n", - "For each response pair, the judgments table includes a `choice` column indicating the better response based on the evaluation criteria used by the autorater.\n", - "\n", - "Each choice has a `confidence score` column between 0 and 1, representing the autorater's level of confidence in the evaluation.\n", - "\n", - "Last but not less important, AutoSXS provides an explanation for why the autorater preferred one response over the other.\n", - "\n", - "Below you have an example of AutoSxS judgments output." 
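In addition to the helper shown below, which only displays judgments with a confidence of at least 0.5, it can be useful to pull out the least confident judgments for manual review. A short sketch, assuming the `confidence` and `explanation` columns described above:

```python
# Sketch: surface the judgments the autorater was least sure about.
# Assumes judgments_df (loaded in the next cell) has the "confidence" and
# "explanation" columns described above.
def least_confident_judgments(judgments_df, n=5):
    """Return the n rows with the lowest autorater confidence."""
    return judgments_df.nsmallest(n, "confidence")[["confidence", "explanation"]]
```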
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "MakdmpYCmehF", - "tags": [] - }, - "outputs": [], - "source": [ - "online_eval_task = [\n", - " task\n", - " for task in job.task_details\n", - " if task.task_name == \"online-evaluation-pairwise\"\n", - "][0]\n", - "\n", - "\n", - "judgments_uri = MessageToDict(online_eval_task.outputs[\"judgments\"]._pb)[\n", - " \"artifacts\"\n", - "][0][\"uri\"]\n", - "\n", - "judgments_df = pd.read_json(judgments_uri, lines=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next cell contains a helper function to print a sample from AutoSxS judgments nicely in the notebook." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "_gyM2-i3HHnP", - "tags": [] - }, - "outputs": [], - "source": [ - "def print_autosxs_judgments(df, n=3):\n", - " \"\"\"Print AutoSxS judgments in the notebook\"\"\"\n", - "\n", - " style = \"white-space: pre-wrap; width: 800px; overflow-x: auto;\"\n", - " df = df.sample(n=n)\n", - "\n", - " for index, row in df.iterrows():\n", - " if row[\"confidence\"] >= 0.5:\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Document:</h2> <div style='{style}'>{row['document']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Response A:</h2> <div style='{style}'>{row['response_a']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Response B:</h2> <div style='{style}'>{row['response_b']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Explanation:</h2> <div style='{style}'>{row['explanation']}</div>
\"\n", - " )\n", - " )\n", - " display(\n", - " HTML(\n", - " f\"

<h2>Confidence score:</h2> <div style='{style}'>{row['confidence']}</div>
\"\n", - " )\n", - " )\n", - " display(HTML(\"
\"))\n", - "\n", - "\n", - "print_autosxs_judgments(judgments_df)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "tJ5PJ9x69KrC" - }, - "source": [ - "### AutoSxS Aggregate metrics\n", - "\n", - "AutoSxS also provides aggregated metrics as an additional evaluation result. These win-rate metrics are calculated by utilizing the judgments table to determine the percentage of times the autorater preferred one model response. \n", - "\n", - "These metrics are relevant for quickly find out which is the best model in the context of the evaluated task.\n", - "\n", - "Below you have an example of AutoSxS Aggregate metrics." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "w2RISjQSJk9R", - "tags": [] - }, - "outputs": [], - "source": [ - "metrics_eval_task = [\n", - " task\n", - " for task in job.task_details\n", - " if task.task_name == \"model-evaluation-text-generation-pairwise\"\n", - "][0]\n", - "\n", - "\n", - "win_rate_metrics = MessageToDict(\n", - " metrics_eval_task.outputs[\"autosxs_metrics\"]._pb\n", - ")[\"artifacts\"][0][\"metadata\"]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next cell contains a helper function to print AutoSxS aggregated metrics nicely in the notebook." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "def print_aggregated_metrics(scores):\n", - " \"\"\"Print AutoSxS aggregated metrics\"\"\"\n", - "\n", - " score_b = round(win_rate_metrics[\"autosxs_model_b_win_rate\"] * 100)\n", - " display(\n", - " HTML(\n", - " f\"

<h3>AutoSxS Autorater prefers {score_b}% of time Model B over Model A</h3>
\"\n", - " )\n", - " )\n", - "\n", - "\n", - "print_aggregated_metrics(win_rate_metrics)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_5mYmHj6poXz" - }, - "source": [ - "### Human-preference alignment metrics\n", - "\n", - "After reviewing the results of your initial AutoSxS evalution, you may wonder about the reliability of the Autorater assessment's alignment with human raters' views.\n", - "\n", - "AutoSxS supports human preference to validate Autorater evaluation.\n", - "\n", - "To check alignment with a human-preference dataset, you need to add the ground truths as a column to the `evaluation_dataset` and pass the column name to `human_preference_column`." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "8vSedvz39-iu" - }, - "source": [ - "#### Read the evaluation data\n", - "\n", - "With respect of evaluation dataset, in this case the `sum_human_eval_gemini_dataset_001` dataset also includes human preferences.\n", - "\n", - "Below you have a sample of the dataset." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "mbfsO2uw9-i5", - "tags": [] - }, - "outputs": [], - "source": [ - "human_evaluation_gemini_df = pd.read_json(HUMAN_EVALUATION_FILE_URI, lines=True)\n", - "human_evaluation_gemini_df.head()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XpmAr6UX-Imb" - }, - "source": [ - "#### Run a model evaluation job\n", - "\n", - "With respect to the AutoSXS pipeline, you must specify the human preference column in the pipeline parameters.\n", - "\n", - "Then, you can run the evaluation pipeline job using the Vertex AI Python SDK as shown below.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "bFmvFt2a3MtN", - "tags": [] - }, - "outputs": [], - "source": [ - "timestamp = str(datetime.datetime.now().timestamp()).replace(\".\", \"\")\n", - "display_name = f\"autosxs-human-eval-{timestamp}\"\n", - "pipeline_root = os.path.join(\"gs://\", BUCKET, display_name)\n", - "\n", - "parameters = {\n", - " \"evaluation_dataset\": HUMAN_EVALUATION_FILE_URI,\n", - " \"id_columns\": [\"id\", \"document\"],\n", - " \"task\": \"summarization\",\n", - " \"autorater_prompt_parameters\": {\n", - " \"inference_context\": {\"column\": \"document\"},\n", - " \"inference_instruction\": {\"template\": \"Summarize the following text: \"},\n", - " },\n", - " \"response_column_a\": \"response_a\",\n", - " \"response_column_b\": \"response_b\",\n", - " \"human_preference_column\": \"actual\",\n", - "}" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "KbhIPY-_3SSB", - "tags": [] - }, - "outputs": [], - "source": [ - "job = aiplatform.PipelineJob(\n", - " job_id=display_name,\n", - " display_name=display_name,\n", - " pipeline_root=pipeline_root,\n", - " template_path=TEMPLATE_URI,\n", - " parameter_values=parameters,\n", - " enable_caching=False,\n", - ")\n", - "job.run(sync=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "QTzJ8BaWEusN" - }, - "source": [ - "### Get human-aligned aggregated metrics\n", - "\n", - "Compared with the aggregated metrics you get before, now the pipeline returns additional measurements that utilize human-preference data provided by you.\n", - "\n", - "Below you have a view of the resulting human-aligned aggregated metrics, comparing the win rates for models for both human preferenced and model inferences.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": 
"JLUOJFjA38ja", - "tags": [] - }, - "outputs": [], - "source": [ - "human_eval_task = [\n", - " task\n", - " for task in job.task_details\n", - " if task.task_name == \"model-evaluation-text-generation-pairwise\"\n", - "][0]\n", - "\n", - "human_aligned_metrics = {\n", - " k: round(v, 3)\n", - " for k, v in MessageToDict(human_eval_task.outputs[\"autosxs_metrics\"]._pb)[\n", - " \"artifacts\"\n", - " ][0][\"metadata\"].items()\n", - " if \"win_rate\" in k\n", - "}" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The next cell contains a helper function to print AutoSxS alignment metrics nicely in the notebook." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "def print_human_preference_metrics(metrics):\n", - " \"\"\"Print AutoSxS Human-preference alignment metrics\"\"\"\n", - " display(\n", - " HTML(\n", - " f\"

<h3>Human-preference alignment metrics: {metrics}</h3>
\"\n", - " )\n", - " )\n", - "\n", - "\n", - "pprint.pprint(human_aligned_metrics)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Acknowledgement \n", - "\n", - "This notebook is adapted from a [tutorial](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluate_gemini_with_autosxs.ipynb)\n", - "written by Ivan Nardini." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright 2024 Google LLC\n", - "\n", - "Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "you may not use this file except in compliance with the License.\n", - "You may obtain a copy of the License at\n", - "\n", - " https://www.apache.org/licenses/LICENSE-2.0\n", - "\n", - "Unless required by applicable law or agreed to in writing, software\n", - "distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "See the License for the specific language governing permissions and\n", - "limitations under the License." - ] - } - ], - "metadata": { - "colab": { - "name": "evaluate_gemini_with_autosxs.ipynb", - "toc_visible": true - }, - "environment": { - "kernel": "conda-base-py", - "name": "workbench-notebooks.m121", - "type": "gcloud", - "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m121" - }, - "kernelspec": { - "display_name": "Python 3 (ipykernel) (Local)", - "language": "python", - "name": "conda-base-py" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.14" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -}
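As an optional cross-check on the human-aligned metrics above, the autorater's judgments can be joined back to the human labels to compute a per-example agreement rate. This is only a sketch under assumptions: it presumes the judgments table keeps the `id` and `document` id columns (as stated earlier), and that a mapping from the human `actual` labels to the autorater's `choice` values is known; `label_map` below is a placeholder for that mapping.

```python
# Sketch: per-example agreement between the autorater and human raters.
# Assumptions: judgments_df keeps the id columns ("id", "document") and
# label_map translates "actual" values into the same format as "choice".
def autorater_human_agreement(judgments_df, human_df, label_map):
    """Return the fraction of examples where the autorater matches the human label."""
    merged = judgments_df.merge(
        human_df[["id", "document", "actual"]], on=["id", "document"], how="inner"
    )
    return (merged["actual"].map(label_map) == merged["choice"]).mean()
```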