diff --git a/config/_default/menus/main.en.yaml b/config/_default/menus/main.en.yaml index 57b29141d98..4f259ace123 100644 --- a/config/_default/menus/main.en.yaml +++ b/config/_default/menus/main.en.yaml @@ -8877,26 +8877,31 @@ menu: parent: digital_experience_heading identifier: pa_experiments weight: 50000 + - name: Plan and Launch Experiments + url: experiments/plan_and_launch_experiments + parent: pa_experiments + identifier: pa_experiments_plan + weight: 1 - name: Define Metrics url: experiments/defining_metrics parent: pa_experiments identifier: pa_experiments_metrics - weight: 1 - - name: Reading Experiment Results + weight: 2 + - name: Read Experiment Results url: experiments/reading_results parent: pa_experiments identifier: pa_experiments_results - weight: 2 + weight: 3 - name: Minimum Detectable Effects url: experiments/minimum_detectable_effect parent: pa_experiments identifier: pa_experiments_mde - weight: 3 + weight: 4 - name: Troubleshooting url: experiments/troubleshooting parent: pa_experiments identifier: pa_experiments_troubleshooting - weight: 4 + weight: 5 - name: Guides url: product_analytics/guide/ parent: product_analytics diff --git a/content/en/experiments/_index.md b/content/en/experiments/_index.md index c189b71660b..8d7bd6854bb 100644 --- a/content/en/experiments/_index.md +++ b/content/en/experiments/_index.md @@ -1,73 +1,53 @@ --- -title: Planning and Launching Experiments -description: Use Datadog Experiments to measure the causal relationship that new experiences or features have on user outcomes. -aliases: - - /product_analytics/experimentation/ +title: Experiments +description: Plan, run, and analyze randomized experiments across your stack with Datadog Experiments. 
further_reading: -- link: "https://www.datadoghq.com/blog/datadog-product-analytics" - tag: "Blog" - text: "Make data-driven design decisions with Product Analytics" -- link: "/experiments/defining_metrics" +- link: "/feature_flags/" tag: "Documentation" - text: "Defining Experiment Metrics" + text: "Feature Flags" +- link: "/product_analytics/" + tag: "Documentation" + text: "Product Analytics" --- {{< callout url="https://www.datadoghq.com/product-preview/datadog-experiments/" >}} Datadog Experiments is in Preview. Complete the form to request access. {{< /callout >}} -## Overview -Datadog Experiments allows you to measure the causal relationship that new experiences and features have on user outcomes. Datadog Experiments uses [Feature Flags][4] to randomly allocate traffic between two or more variations, using one of the variations as a control group. - -This page walks you through planning and launching your experiments. - -## Setup -To create, configure, and launch your experiment, complete the following steps: - -### Step 1 - Create your experiment - -1. Navigate to the [Experiments][1] page in Datadog Product Analytics. -2. Click **+ Create Experiment**. -3. Enter your experiment name and hypothesis. - -{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="The experiment creation form with fields for experiment name and hypothesis." style="width:80%;" >}} - -### Step 2 - Add metrics - -After you’ve created an experiment, add your primary metric and optional guardrails. See [Defining Metrics][2] for details on how to create metrics. - -{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="The metrics configuration panel with options for primary metric and guardrails." 
style="width:80%;" >}} - -#### Add a sample size calculation (optional) - -After selecting your experiment’s metrics, use the optional sample size calculator to determine how small of a change your experiment can reliably detect with your current sample size. +## Overview -1. Select the **Entrypoint Event** of your experiment. This specifies _when_ in the user journey they will be enrolled into the test. -1. Click **Run calculation** to see the [Minimum Detectable Effects][3] (MDE) your experiment has on your metrics. The MDE is the smallest difference that you are able to detect between your experiment’s variants. +Datadog Experiments helps teams run and analyze randomized experiments, such as A/B tests. These experiments help you understand how new features affect business outcomes, user behavior, and application performance, so you can make confident, data-backed decisions about what to implement. -{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="The Sample Size Calculator modal with the Entrypoint Event dropdown highlighted." style="width:90%;" >}} +Datadog Experiments consists of two components: -### Step 3 - Launch your experiment +- An integration with [Datadog Feature Flags][1] for deploying and managing randomized experiments. +- A statistical analysis of [Real User Monitoring (RUM)][2] and [Product Analytics][3] data to evaluate experiment results. -After specifying your metrics, you can launch your experiment. +## Getting started -1. Select a Feature Flag that captures the variants you want to test. If you have not yet created a feature flag, see the [Getting Started with Feature Flags][4] page. +To start using Datadog Experiments, configure at least one of the following data sources: -1. Click **Set Up Experiment on Feature Flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic, or schedule a gradual rollout. 
+- [Real User Monitoring (RUM)][2] for client-side and performance signals. +- [Product Analytics][3] for user behavior and journey metrics. -{{< img src="/product_analytics/experiment/exp_feature_flag.png" alt="Set up an experiment on a Feature Flag." style="width:90%;" >}} +After configuring a data source, follow these steps to launch your experiment: -## Next steps -1. **[Defining metrics][2]**: Define the metrics you want to measure during your experiments. -1. **[Reading Experiment Results][5]**: Review and explore your experiment results. -1. **[Minimum Detectable Effects][3]**: Choose appropriately sized MDEs. +1. **[Create a metric][4]** to evaluate your experiment. +1. **[Create an experiment][5]** to define your hypothesis and optionally calculate a [sample size][8]. +1. **[Create a feature flag][6]** and implement it using the [SDK][9] to assign users to the control and variant groups. A feature flag is required to launch your experiment. +1. **[Launch your experiment][7]** to see the impact of your change on business outcomes, user journey, and application performance. +{{< img src="/product_analytics/experiment/overview_metrics_view-1.png" alt="The Experiments metrics view showing business, funnel, and performance metrics with control and variant values and relative lift for each metric. A tooltip is open on the Revenue metric showing Non-CUPED values for Revenue per User, Total Revenue, and User Assignment Count across the control and variant groups." 
style="width:90%;" >}} ## Further reading {{< partial name="whats-next/whats-next.html" >}} -[1]: https://app.datadoghq.com/product-analytics/experiments -[2]: /experiments/defining_metrics -[3]: /experiments/minimum_detectable_effect -[4]: /getting_started/feature_flags/ -[5]: /experiments/reading_results +[1]: /feature_flags/ +[2]: /real_user_monitoring/ +[3]: /product_analytics/#getting-started +[4]: /experiments/defining_metrics +[5]: /experiments/plan_and_launch_experiments +[6]: /getting_started/feature_flags/#create-your-first-feature-flag +[7]: /experiments/plan_and_launch_experiments#step-3---launch-your-experiment +[8]: /experiments/plan_and_launch_experiments#add-a-sample-size-calculation-optional +[9]: /getting_started/feature_flags/#feature-flags-sdks diff --git a/content/en/experiments/plan_and_launch_experiments.md b/content/en/experiments/plan_and_launch_experiments.md new file mode 100644 index 00000000000..57840e00c1d --- /dev/null +++ b/content/en/experiments/plan_and_launch_experiments.md @@ -0,0 +1,73 @@ +--- +title: Plan and Launch Experiments +description: Use Datadog Experiments to measure the causal relationship that new experiences or features have on user outcomes. +aliases: + - /product_analytics/experimentation/ +further_reading: +- link: "https://www.datadoghq.com/blog/datadog-product-analytics" + tag: "Blog" + text: "Make data-driven design decisions with Product Analytics" +- link: "/experiments/defining_metrics" + tag: "Documentation" + text: "Defining Experiment Metrics" +--- + +{{< callout url="https://www.datadoghq.com/product-preview/datadog-experiments/" >}} +Datadog Experiments is in Preview. Complete the form to request access. +{{< /callout >}} + +## Overview +Use Datadog Experiments to measure the causal relationship that new experiences and features have on user outcomes. Datadog Experiments uses [Feature Flags][4] to randomly allocate traffic between two or more variations, using one of the variations as a control group. 
+
+This page walks you through planning and launching your experiments.
+
+## Setup
+To create, configure, and launch your experiment, complete the following steps:
+
+### Step 1 - Create your experiment
+
+1. Navigate to the [Experiments][1] page in Datadog Product Analytics.
+2. Click **+ Create Experiment**.
+3. Enter your experiment name and hypothesis.
+
+{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="The experiment creation form with fields for experiment name and hypothesis." style="width:80%;" >}}
+
+### Step 2 - Add metrics
+
+After you have created an experiment, add your primary metric and optional guardrails. See [Defining Metrics][2] for details on how to create metrics.
+
+{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="The metrics configuration panel with options for primary metric and guardrails." style="width:80%;" >}}
+
+#### Add a sample size calculation (optional)
+
+After selecting your experiment’s metrics, use the optional sample size calculator to determine how small a change your experiment can reliably detect with your current sample size.
+
+1. Select the **Entrypoint Event** of your experiment. This specifies _when_ in the user journey users are enrolled in the test.
+1. Click **Run calculation** to see the [Minimum Detectable Effect][3] (MDE) for each of your metrics. The MDE is the smallest difference you can detect between your experiment’s variants.
+
+{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="The Sample Size Calculator modal with the Entrypoint Event dropdown highlighted." style="width:90%;" >}}
+
+### Step 3 - Launch your experiment
+
+After specifying your metrics, you can launch your experiment.
+
+1. Select a feature flag that captures the variants you want to test. If you have not yet created a feature flag, see the [Getting Started with Feature Flags][4] page.
+
+1. Click **Set Up Experiment on Feature Flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic or schedule a gradual rollout.
+
+{{< img src="/product_analytics/experiment/exp_feature_flag.png" alt="Set up an experiment on a Feature Flag." style="width:90%;" >}}
+
+## Next steps
+1. **[Defining Metrics][2]**: Define the metrics you want to measure during your experiments.
+1. **[Reading Experiment Results][5]**: Review and explore your experiment results.
+1. **[Minimum Detectable Effects][3]**: Choose appropriately sized MDEs.
+
+
+## Further reading
+{{< partial name="whats-next/whats-next.html" >}}
+
+[1]: https://app.datadoghq.com/product-analytics/experiments
+[2]: /experiments/defining_metrics
+[3]: /experiments/minimum_detectable_effect
+[4]: /getting_started/feature_flags/
+[5]: /experiments/reading_results
diff --git a/static/images/product_analytics/experiment/overview_metrics_view-1.png b/static/images/product_analytics/experiment/overview_metrics_view-1.png
new file mode 100644
index 00000000000..834dd714bd5
Binary files /dev/null and b/static/images/product_analytics/experiment/overview_metrics_view-1.png differ