[DOCS-13696] Add new Experiments landing page #35419
Open: iadjivon wants to merge 16 commits into master from ida.adjivon/DOCS-13696-new-main-page-exp-ga
Changes from all commits (16 commits):
- cdd5296 (iadjivon): renamed the index to planning and launching experiments, changed the …
- 69704ec (iadjivon): added a draft for the overview page.
- fda2a2e (iadjivon): quick changes
- 81b3c57 (iadjivon): changes to the overview page.
- f8d53d3 (iadjivon): changes to the overview and content
- 58aab32 (iadjivon): modified the menu file
- d7a5cfe (iadjivon): changed the image
- 266e478 (iadjivon): modified the list of impacts based on the image
- cdb8c0d (iadjivon): Polish overview copy and update alt text
- 1c1ca7f (iadjivon): Tighten statistical analysis bullet on overview
- 8941b26 (iadjivon): Clarify Feature Flags relationship in components list
- af08736 (iadjivon): Fix parallel structure in components list
- eeb28f4 (iadjivon): removed railing whitespaces, removed RUM to avoid any confusion. Kept…
- b936839 (iadjivon): added the preview callout to the landing page. this will be removed o…
- 88a2b8e (iadjivon): final changes to the overview doc
- d4b38a1 (iadjivon): quick consistency change
@@ -1,73 +1,53 @@
 ---
-title: Planning and Launching Experiments
-description: Use Datadog Experiments to measure the causal relationship that new experiences or features have on user outcomes.
-aliases:
-  - /product_analytics/experimentation/
+title: Experiments
+description: Plan, run, and analyze randomized experiments across your stack with Datadog Experiments.
 further_reading:
 - link: "https://www.datadoghq.com/blog/datadog-product-analytics"
   tag: "Blog"
   text: "Make data-driven design decisions with Product Analytics"
-- link: "/experiments/defining_metrics"
+- link: "/feature_flags/"
   tag: "Documentation"
-  text: "Defining Experiment Metrics"
+  text: "Feature Flags"
+- link: "/product_analytics/"
+  tag: "Documentation"
+  text: "Product Analytics"
 ---

 {{< callout url="https://www.datadoghq.com/product-preview/datadog-experiments/" >}}
 Datadog Experiments is in Preview. Complete the form to request access.
 {{< /callout >}}

-## Overview
-Datadog Experiments allows you to measure the causal relationship that new experiences and features have on user outcomes. Datadog Experiments uses [Feature Flags][4] to randomly allocate traffic between two or more variations, using one of the variations as a control group.
-
-This page walks you through planning and launching your experiments.
-
-## Setup
-To create, configure, and launch your experiment, complete the following steps:
-
-### Step 1 - Create your experiment
-
-1. Navigate to the [Experiments][1] page in Datadog Product Analytics.
-2. Click **+ Create Experiment**.
-3. Enter your experiment name and hypothesis.
-
-{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="The experiment creation form with fields for experiment name and hypothesis." style="width:80%;" >}}
-
-### Step 2 - Add metrics
-
-After you’ve created an experiment, add your primary metric and optional guardrails. See [Defining Metrics][2] for details on how to create metrics.
-
-{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="The metrics configuration panel with options for primary metric and guardrails." style="width:80%;" >}}
-
-#### Add a sample size calculation (optional)
-
-After selecting your experiment’s metrics, use the optional sample size calculator to determine how small of a change your experiment can reliably detect with your current sample size.
+## Overview
-
-1. Select the **Entrypoint Event** of your experiment. This specifies _when_ in the user journey they will be enrolled into the test.
-1. Click **Run calculation** to see the [Minimum Detectable Effects][3] (MDE) your experiment has on your metrics. The MDE is the smallest difference that you are able to detect between your experiment’s variants.
+Datadog Experiments helps teams run and analyze randomized experiments, such as A/B tests. These experiments help you understand how new features affect business outcomes, user behavior, and application performance, so you can make confident, data-backed decisions about what to implement.

-{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="The Sample Size Calculator modal with the Entrypoint Event dropdown highlighted." style="width:90%;" >}}
+Datadog Experiments consists of two components:

-### Step 3 - Launch your experiment
+- An integration with [Datadog Feature Flags][1] for deploying and managing randomized experiments.
+- A statistical analysis of [Real User Monitoring (RUM)][2] and [Product Analytics][3] data to evaluate experiment results.

-After specifying your metrics, you can launch your experiment.
+## Getting started

-1. Select a Feature Flag that captures the variants you want to test. If you have not yet created a feature flag, see the [Getting Started with Feature Flags][4] page.
+To start using Datadog Experiments, configure at least one of the following data sources:

-1. Click **Set Up Experiment on Feature Flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic, or schedule a gradual rollout.
+- [Real User Monitoring (RUM)][2] for client-side and performance signals.
+- [Product Analytics][3] for user behavior and journey metrics.

-{{< img src="/product_analytics/experiment/exp_feature_flag.png" alt="Set up an experiment on a Feature Flag." style="width:90%;" >}}
+After configuring a data source, follow these steps to launch your experiment:

-## Next steps
-1. **[Defining metrics][2]**: Define the metrics you want to measure during your experiments.
-1. **[Reading Experiment Results][5]**: Review and explore your experiment results.
-1. **[Minimum Detectable Effects][3]**: Choose appropriately sized MDEs.
+1. **[Create a metric][4]** to evaluate your experiment.
+1. **[Create an experiment][5]** to define your hypothesis and optionally calculate a [sample size][8].
+1. **[Create a feature flag][6]** and implement it using the [SDK][9] to assign users to the control and variant groups. A feature flag is required to launch your experiment.
+1. **[Launch your experiment][7]** to see the impact of your change on business outcomes, user journey, and application performance.

+{{< img src="/product_analytics/experiment/overview_metrics_view-1.png" alt="The Experiments metrics view showing business, funnel, and performance metrics with control and variant values and relative lift for each metric. A tooltip is open on the Revenue metric showing Non-CUPED values for Revenue per User, Total Revenue, and User Assignment Count across the control and variant groups." style="width:90%;" >}}

 ## Further reading
 {{< partial name="whats-next/whats-next.html" >}}

-[1]: https://app.datadoghq.com/product-analytics/experiments
-[2]: /experiments/defining_metrics
-[3]: /experiments/minimum_detectable_effect
-[4]: /getting_started/feature_flags/
-[5]: /experiments/reading_results
+[1]: /feature_flags/
+[2]: /real_user_monitoring/
+[3]: /product_analytics/#getting-started
+[4]: /experiments/defining_metrics
+[5]: /experiments/plan_and_launch_experiments
+[6]: /getting_started/feature_flags/#create-your-first-feature-flag
+[7]: /experiments/plan_and_launch_experiments#step-3---launch-your-experiment
+[8]: /experiments/plan_and_launch_experiments#add-a-sample-size-calculation-optional
+[9]: /getting_started/feature_flags/#feature-flags-sdks
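The landing page above describes using feature flags to randomly allocate traffic between two or more variations, with one variation as the control group. As a rough illustration of how such allocation commonly works, here is a minimal sketch of deterministic hash-based bucketing. All names here are hypothetical: this is not Datadog's Feature Flags SDK API, only the general technique.

```python
import hashlib

def assign_variant(user_id: str, experiment_key: str, variants: list[str]) -> str:
    """Deterministically bucket a user into one of an experiment's variants.

    Hypothetical sketch: real allocation is handled by the feature flag
    SDK; this only illustrates hash-based bucketing.
    """
    # Hash user and experiment together so the same user always sees the
    # same variant, while assignments differ across experiments.
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# One of the variants acts as the control group.
variant = assign_variant("user-123", "new-checkout-flow", ["control", "treatment"])
```

Because assignment is a pure function of user and experiment, a user is never flipped between variants mid-experiment, which is what makes the control/treatment comparison valid.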
Contributor (author) comment on this file: NOTE: This page is not new. It is moved from The focus of this PR is the

@@ -0,0 +1,73 @@
+---
+title: Plan and Launch Experiments
+description: Use Datadog Experiments to measure the causal relationship that new experiences or features have on user outcomes.
+aliases:
+  - /product_analytics/experimentation/
+further_reading:
+- link: "https://www.datadoghq.com/blog/datadog-product-analytics"
+  tag: "Blog"
+  text: "Make data-driven design decisions with Product Analytics"
+- link: "/experiments/defining_metrics"
+  tag: "Documentation"
+  text: "Defining Experiment Metrics"
+---
+
+{{< callout url="https://www.datadoghq.com/product-preview/datadog-experiments/" >}}
+Datadog Experiments is in Preview. Complete the form to request access.
+{{< /callout >}}
+
+## Overview
+Use Datadog Experiments to measure the causal relationship that new experiences and features have on user outcomes. Datadog Experiments uses [Feature Flags][4] to randomly allocate traffic between two or more variations, using one of the variations as a control group.
+
+This page walks you through planning and launching your experiments.
+
+## Setup
+To create, configure, and launch your experiment, complete the following steps:
+
+### Step 1 - Create your experiment
+
+1. Navigate to the [Experiments][1] page in Datadog Product Analytics.
+2. Click **+ Create Experiment**.
+3. Enter your experiment name and hypothesis.
+
+{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="The experiment creation form with fields for experiment name and hypothesis." style="width:80%;" >}}
+
+### Step 2 - Add metrics
+
+After you have created an experiment, add your primary metric and optional guardrails. See [Defining Metrics][2] for details on how to create metrics.
+
+{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="The metrics configuration panel with options for primary metric and guardrails." style="width:80%;" >}}
+
+#### Add a sample size calculation (optional)
+
+After selecting your experiment’s metrics, use the optional sample size calculator to determine how small of a change your experiment can reliably detect with your current sample size.
+
+1. Select the **Entrypoint Event** of your experiment. This specifies _when_ in the user journey they will be enrolled into the test.
+1. Click **Run calculation** to see the [Minimum Detectable Effects][3] (MDE) your experiment has on your metrics. The MDE is the smallest difference you can detect between your experiment’s variants.
+
+{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="The Sample Size Calculator modal with the Entrypoint Event dropdown highlighted." style="width:90%;" >}}
+
+### Step 3 - Launch your experiment
+
+After specifying your metrics, you can launch your experiment.
+
+1. Select a feature flag that captures the variants you want to test. If you have not yet created a feature flag, see the [Getting Started with Feature Flags][4] page.
+
+1. Click **Set Up Experiment on Feature Flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic, or schedule a gradual rollout.
+
+{{< img src="/product_analytics/experiment/exp_feature_flag.png" alt="Set up an experiment on a Feature Flag." style="width:90%;" >}}
+
+## Next steps
+1. **[Defining metrics][2]**: Define the metrics you want to measure during your experiments.
+1. **[Reading Experiment Results][5]**: Review and explore your experiment results.
+1. **[Minimum Detectable Effects][3]**: Choose appropriately sized MDEs.
+
+## Further reading
+{{< partial name="whats-next/whats-next.html" >}}
+
+[1]: https://app.datadoghq.com/product-analytics/experiments
+[2]: /experiments/defining_metrics
+[3]: /experiments/minimum_detectable_effect
+[4]: /getting_started/feature_flags/
+[5]: /experiments/reading_results
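The optional sample size step above ties sample size to the minimum detectable effect: the smaller the lift you want to detect, the more users you need. For intuition, the standard two-proportion power calculation approximates the users needed per variant for a given absolute lift. This is a generic statistics sketch, not how Datadog's calculator is implemented (it works from your selected entrypoint event and observed traffic).

```python
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per variant to detect an absolute lift of
    `mde` over a baseline conversion rate (two-sided two-proportion test).

    Generic textbook formula for intuition only.
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / mde ** 2) + 1

# Smaller MDEs require disproportionately more traffic.
n_small = sample_size_per_variant(0.10, 0.01)  # detect a 1-point lift
n_large = sample_size_per_variant(0.10, 0.05)  # detect a 5-point lift
```

The quadratic dependence on `mde` is why halving the detectable effect roughly quadruples the required sample, and why the calculator reports the MDE achievable at your current traffic rather than an arbitrary target.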
Binary file added (+125 KB): static/images/product_analytics/experiment/overview_metrics_view-1.png
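The alt text of the screenshot added in this PR mentions "Non-CUPED" values. CUPED is a standard variance-reduction technique for experiment analysis that adjusts each user's metric using a pre-experiment covariate, tightening confidence intervals without changing the mean. A generic sketch of the idea follows; this is not Datadog's implementation, and the function name is illustrative.

```python
def cuped_adjust(metric: list[float], covariate: list[float]) -> list[float]:
    """CUPED adjustment: reduce metric variance using a pre-experiment
    covariate (for example, each user's revenue before the experiment).

    Standard technique, sketched here; assumes the covariate has
    non-zero variance.
    """
    n = len(metric)
    mean_y = sum(metric) / n
    mean_x = sum(covariate) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(covariate, metric)) / n
    var = sum((x - mean_x) ** 2 for x in covariate) / n
    theta = cov / var
    # Adjusted values keep the same mean but have lower variance when the
    # covariate correlates with the metric.
    return [y - theta * (x - mean_x) for x, y in zip(covariate, metric)]

adjusted = cuped_adjust([2.0, 4.0, 6.0, 8.0], [1.0, 2.0, 3.0, 4.0])
```

With a perfectly correlated covariate, as in the usage line above, all adjusted values collapse to the metric mean, which is the limiting case of the variance reduction.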
NOTE: This page omits any mention of warehouse-native data, as that content is not yet ready.
I will open another PR to include it after the warehouse-native docs are ready and can be linked.