# Add structure for remaining guides content (#26423)
## Summary & Motivation
Reorganize the 'guides' and 'getting started' content (the ['Docs' section](https://docs-preview.dagster.io/) of the docs) to prepare for the remaining content.
No need for a line-level review on this one; we just need to make sure
the tests are green (except Vale—that's a bigger problem), and that
staging loads and looks basically fine.
## How I Tested These Changes
Local build
---------
Signed-off-by: nikki everett <[email protected]>
### docs/docs-beta/docs/dagster-plus/deployment/deployment-types/serverless/security.md (2 additions, 2 deletions)
```diff
@@ -22,11 +22,11 @@ The default I/O manager cannot be used if you are a Serverless user who:
 - Are otherwise working with data subject to GDPR or other such regulations
 :::

-In Serverless, code that uses the default [I/O manager](/guides/build/configure/io-managers) is automatically adjusted to save data in Dagster+ managed storage. This automatic change is useful because the Serverless filesystem is ephemeral, which means the default I/O manager wouldn't work as expected.
+In Serverless, code that uses the default [I/O manager](/guides/operate/io-managers) is automatically adjusted to save data in Dagster+ managed storage. This automatic change is useful because the Serverless filesystem is ephemeral, which means the default I/O manager wouldn't work as expected.

 However, this automatic change also means potentially sensitive data could be **stored** and not just processed or orchestrated by Dagster+.

-To prevent this, you can use [another I/O manager](/guides/build/configure/io-managers#built-in) that stores data in your infrastructure or [adapt your code to avoid using an I/O manager](/guides/build/configure/io-managers#before-you-begin).
+To prevent this, you can use [another I/O manager](/guides/operate/io-managers#built-in) that stores data in your infrastructure or [adapt your code to avoid using an I/O manager](/guides/operate/io-managers#before-you-begin).

 :::note
 You must have [boto3](https://pypi.org/project/boto3/) or `dagster-cloud[serverless]` installed as a project dependency otherwise the Dagster+ managed storage can fail and silently fall back to using the default I/O manager.
```
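(Not part of this diff.) For readers deciding between the options above, here is a minimal sketch of swapping in an I/O manager that keeps data in your own infrastructure, assuming `dagster-aws` is installed; the asset and bucket names are hypothetical:

```python
from dagster import Definitions, asset
from dagster_aws.s3 import S3PickleIOManager, S3Resource


@asset
def sensitive_records() -> list[dict]:
    # Hypothetical asset; its output is handled by the configured I/O manager.
    return [{"id": 1}]


defs = Definitions(
    assets=[sensitive_records],
    resources={
        # Binding the "io_manager" key overrides the default I/O manager,
        # so outputs land in your own S3 bucket rather than in
        # Dagster+ managed storage.
        "io_manager": S3PickleIOManager(
            s3_resource=S3Resource(region_name="us-east-1"),
            s3_bucket="my-company-dagster-io",  # hypothetical bucket
        ),
    },
)
```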
### docs/docs-beta/docs/dagster-plus/features/authentication-and-access-control/rbac/user-roles-permissions.md (1 addition, 1 deletion)

```diff
@@ -115,7 +115,7 @@ TODO: add picture previously at "/images/dagster-cloud/user-token-management/cod
```
### docs/docs-beta/docs/dagster-plus/features/authentication-and-access-control/scim/okta-scim.md (1 addition, 1 deletion)

```diff
@@ -18,7 +18,7 @@ In this guide, we'll walk you through configuring [Okta SCIM provisioning](https
 With Dagster+'s Okta SCIM provisioning feature, you can:

 - **Create users**. Users that are assigned to the Dagster+ application in the IdP will be automatically added to your Dagster+ organization.
-- **Update user attributes.** Updating a user’s name or email address in the IdP will automatically sync the change to your user list in Dagster+.
+- **Update user attributes.** Updating a user's name or email address in the IdP will automatically sync the change to your user list in Dagster+.
 - **Remove users.** Deactivating or unassigning a user from the Dagster+ application in the IdP will remove them from the Dagster+ organization
 {/* - **Push user groups.** Groups and their members in the IdP can be pushed to Dagster+ as [Teams](/dagster-plus/account/managing-users/managing-teams). */}
 - **Push user groups.** Groups and their members in the IdP can be pushed to Dagster+ as
```
### docs/docs-beta/docs/dagster-plus/features/catalog-views.md (1 addition, 1 deletion)

```diff
@@ -17,7 +17,7 @@ In this guide, you'll learn how to create, access, and share catalog views with
 <summary>Prerequisites</summary>

 - **Organization Admin**, **Admin**, or **Editor** permissions on Dagster+
-- Familiarity with [Assets](/guides/build/assets-concepts/index.mdx and [Asset metadata](/guides/build/create-a-pipeline/metadata)
+- Familiarity with [Assets](/guides/build/create-asset-pipelines/assets-concepts/index.mdx and [Asset metadata](/guides/build/create-asset-pipelines/metadata)
```
### docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/change-tracking.md (1 addition, 1 deletion)

```diff
@@ -8,7 +8,7 @@ unlisted: true
 This guide is applicable to Dagster+.
 :::

-Branch Deployments Change Tracking makes it eaiser for you and your team to identify how changes in a pull request will impact data assets. By the end of this guide, you'll understand how Change Tracking works and what types of asset changes can be detected.
+Branch Deployments Change Tracking makes it easier for you and your team to identify how changes in a pull request will impact data assets. By the end of this guide, you'll understand how Change Tracking works and what types of asset changes can be detected.
```
### docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md (11 additions, 11 deletions)
```diff
@@ -8,14 +8,14 @@ unlisted: true
 This guide is applicable to Dagster+.
 :::

-This guide details a workflow to test Dagster code in your cloud environment without impacting your production data. To highlight this functionality, we’ll leverage Dagster+ branch deployments and a Snowflake database to:
+This guide details a workflow to test Dagster code in your cloud environment without impacting your production data. To highlight this functionality, we'll leverage Dagster+ branch deployments and a Snowflake database to:

 - Execute code on a feature branch directly on Dagster+
 - Read and write to a unique per-branch clone of our Snowflake data

 With these tools, we can merge changes with confidence in the impact on our data platform and with the assurance that our code will execute as intended.

-Here’s an overview of the main concepts we’ll be using:
+Here’s an overview of the main concepts we'll be using:

 {/* - [Assets](/concepts/assets/software-defined-assets) - We'll define three assets that each persist a table to Snowflake. */}
 - [Assets](/todo) - We'll define three assets that each persist a table to Snowflake.
```
```diff
@@ -35,7 +35,7 @@ Here’s an overview of the main concepts we’ll be using:
 ## Prerequisites

 :::note
-This guide is an extension of the <a href="/guides/dagster/transitioning-data-pipelines-from-development-to-production"> Transitioning data pipelines from development to production </a> guide, illustrating a workflow for staging deployments. We’ll use the examples from this guide to build a workflow atop Dagster+’s branch deployment feature.
+This guide is an extension of the <a href="/guides/dagster/transitioning-data-pipelines-from-development-to-production"> Transitioning data pipelines from development to production </a> guide, illustrating a workflow for staging deployments. We'll use the examples from this guide to build a workflow atop Dagster+’s branch deployment feature.
 :::

 To complete the steps in this guide, you'll need:
```
```diff
@@ -52,7 +52,7 @@ To complete the steps in this guide, you'll need:
 ## Overview

-We have a `PRODUCTION` Snowflake database with a schema named `HACKER_NEWS`. In our production cloud environment, we’d like to write tables to Snowflake containing subsets of Hacker News data. These tables will be:
+We have a `PRODUCTION` Snowflake database with a schema named `HACKER_NEWS`. In our production cloud environment, we'd like to write tables to Snowflake containing subsets of Hacker News data. These tables will be:

 - `ITEMS` - A table containing the entire dataset
 - `COMMENTS` - A table containing data about comments
```
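(Sketch, not from the diff.) To make the overview concrete: an asset that persists one of these tables through a Snowflake I/O manager might look roughly like the following; the `items` upstream asset and the `type` column are assumptions about the Hacker News dataset, not code from the guide:

```python
import pandas as pd
from dagster import asset


@asset(io_manager_key="snowflake_io_manager")
def comments(items: pd.DataFrame) -> pd.DataFrame:
    # The configured Snowflake I/O manager persists the returned DataFrame
    # as the COMMENTS table; `items` is the hypothetical upstream asset
    # holding the full Hacker News dataset.
    return items[items["type"] == "comment"]
```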
```diff
@@ -128,14 +128,14 @@ As you can see, our assets use an [I/O manager](/todo) named `snowflake_io_manag
 ## Step 2: Configure our assets for each environment

-At runtime, we’d like to determine which environment our code is running in: branch deployment, or production. This information dictates how our code should execute, specifically with which credentials and with which database.
+At runtime, we'd like to determine which environment our code is running in: branch deployment, or production. This information dictates how our code should execute, specifically with which credentials and with which database.

-To ensure we can't accidentally write to production from within our branch deployment, we’ll use a different set of credentials from production and write to our database clone.
+To ensure we can't accidentally write to production from within our branch deployment, we'll use a different set of credentials from production and write to our database clone.

 {/* Dagster automatically sets certain [environment variables](/dagster-plus/managing-deployments/reserved-environment-variables) containing deployment metadata, allowing us to read these environment variables to discern between deployments. We can access the `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` environment variable to determine the currently executing environment. */}
 Dagster automatically sets certain [environment variables](/todo) containing deployment metadata, allowing us to read these environment variables to discern between deployments. We can access the `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` environment variable to determine the currently executing environment.

-Because we want to configure our assets to write to Snowflake using a different set of credentials and database in each environment, we’ll configure a separate I/O manager for each environment:
+Because we want to configure our assets to write to Snowflake using a different set of credentials and database in each environment, we'll configure a separate I/O manager for each environment:
```
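(Sketch, not from the diff.) Reading that environment variable and switching the I/O manager configuration might look like this; the credential env var names and the clone database name are illustrative assumptions:

```python
import os

from dagster_snowflake_pandas import SnowflakePandasIOManager

# Dagster+ sets DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT to "1" in branch deployments.
is_branch = os.getenv("DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT") == "1"

snowflake_io_manager = SnowflakePandasIOManager(
    account=os.getenv("SNOWFLAKE_ACCOUNT", ""),
    # Separate credentials per environment so a branch deployment can
    # never write to production.
    user=os.getenv("DEV_SNOWFLAKE_USER" if is_branch else "PROD_SNOWFLAKE_USER", ""),
    password=os.getenv("DEV_SNOWFLAKE_PASSWORD" if is_branch else "PROD_SNOWFLAKE_PASSWORD", ""),
    # Branch deployments write to a per-branch clone (hypothetical name)
    # instead of the PRODUCTION database.
    database="PRODUCTION_CLONE" if is_branch else "PRODUCTION",
)
```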
```diff
-We’ve defined `drop_database_clone` and `clone_production_database` to utilize the <PyObject object="SnowflakeResource" module="dagster_snowflake" />. The Snowflake resource will use the same configuration as the Snowflake I/O manager to generate a connection to Snowflake. However, while our I/O manager writes outputs to Snowflake, the Snowflake resource executes queries against Snowflake.
+We've defined `drop_database_clone` and `clone_production_database` to utilize the <PyObject object="SnowflakeResource" module="dagster_snowflake" />. The Snowflake resource will use the same configuration as the Snowflake I/O manager to generate a connection to Snowflake. However, while our I/O manager writes outputs to Snowflake, the Snowflake resource executes queries against Snowflake.

 We now need to define resources that configure our jobs to the current environment. We can modify the resource mapping by environment as follows:
```
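(Sketch, not from the diff; the guide's actual op bodies and resource mapping are elided above.) Under the assumption that the ops issue plain SQL through the resource, they and a by-environment resource set might look like:

```python
import os

from dagster import In, Nothing, op
from dagster_snowflake import SnowflakeResource


@op
def drop_database_clone(snowflake: SnowflakeResource) -> None:
    # The resource executes queries directly; the I/O manager, by contrast,
    # only persists op/asset outputs as tables.
    with snowflake.get_connection() as conn:
        conn.cursor().execute("DROP DATABASE IF EXISTS PRODUCTION_CLONE")  # hypothetical name


@op(ins={"start": In(Nothing)})
def clone_production_database(snowflake: SnowflakeResource) -> None:
    # Snowflake zero-copy clone: a cheap per-branch copy of production.
    with snowflake.get_connection() as conn:
        conn.cursor().execute("CREATE DATABASE PRODUCTION_CLONE CLONE PRODUCTION")


# Map resources by environment: branch deployments get clone credentials.
is_branch = os.getenv("DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT") == "1"
resources = {
    "snowflake": SnowflakeResource(
        account=os.getenv("SNOWFLAKE_ACCOUNT", ""),
        user=os.getenv("DEV_SNOWFLAKE_USER" if is_branch else "PROD_SNOWFLAKE_USER", ""),
        password=os.getenv("DEV_SNOWFLAKE_PASSWORD" if is_branch else "PROD_SNOWFLAKE_PASSWORD", ""),
        database="PRODUCTION_CLONE" if is_branch else "PRODUCTION",
    ),
}
```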
```diff
@@ -322,7 +322,7 @@ Opening a pull request for our current branch will automatically kick off a bran
 Alternatively, the logs for the branch deployment workflow can be found in the **Actions** tab on the GitHub pull request.

-We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, we’ll now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:
+We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, we'll now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:
```

```diff
-We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, we’ll now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:
+We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, we'll now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:
```
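(Sketch, not from the diff.) The guide runs its verification queries within Snowflake itself; an equivalent spot-check from Python, with the clone and table names as assumptions, could be:

```python
import os

import snowflake.connector

# Connect straight to the branch's clone and spot-check one table.
conn = snowflake.connector.connect(
    account=os.getenv("SNOWFLAKE_ACCOUNT"),
    user=os.getenv("DEV_SNOWFLAKE_USER"),
    password=os.getenv("DEV_SNOWFLAKE_PASSWORD"),
    database="PRODUCTION_CLONE",  # hypothetical clone name
)
row_count = conn.cursor().execute("SELECT COUNT(*) FROM HACKER_NEWS.COMMENTS").fetchone()[0]
print(f"COMMENTS rows in clone: {row_count}")
```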
```diff
 After merging our branch, viewing our Snowflake database will confirm that our branch deployment step has successfully deleted our database clone.

-We’ve now built an elegant workflow that enables future branch deployments to automatically have access to their own clones of our production database that are cleaned up upon merge!
+We've now built an elegant workflow that enables future branch deployments to automatically have access to their own clones of our production database that are cleaned up upon merge!
```
### docs/docs-beta/docs/dagster-plus/index.md (1 addition, 1 deletion)
```diff
@@ -7,7 +7,7 @@ Dagster+ is a managed orchestration platform built on top of Dagster's open sour
 Dagster+ is built to be the most performant, reliable, and cost effective way for data engineering teams to run Dagster in production. Dagster+ is also great for students, researchers, or individuals who want to explore Dagster with minimal overhead.

-Dagster+ comes in two flavors: a fully [Serverless](/dagster-plus/deployment/deployment-types/serverless) offering and a [Hybrid](/dagster-plus/deployment/deployment-types/hybrid) offering. In both cases, Dagster+ does the hard work of managing your data orchestration control plane. Compared to a [Dagster open source deployment](/guides/), Dagster+ manages:
+Dagster+ comes in two flavors: a fully [Serverless](/dagster-plus/deployment/deployment-types/serverless) offering and a [Hybrid](/dagster-plus/deployment/deployment-types/hybrid) offering. In both cases, Dagster+ does the hard work of managing your data orchestration control plane. Compared to a [Dagster open source deployment](guides/deploy/index.md), Dagster+ manages:

 - Dagster's web UI at https://dagster.plus
 - Metadata stores for data cataloging and cost insights
```