diff --git a/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aur-pgsql.md b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aur-pgsql.md deleted file mode 100644 index b4017b661..000000000 --- a/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aur-pgsql.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -Title: Prepare AWS Aurora and PostgreSQL for RDI -aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/my-sql-mariadb/ -alwaysopen: false -categories: -- docs -- integrate -- rs -- rdi -description: Prepare AWS Aurora/PostgreSQL databases to work with RDI -group: di -linkTitle: Prepare AWS Aurora/PostgreSQL -summary: Redis Data Integration keeps Redis in sync with the primary database in near - real time. -type: integration -weight: 5 ---- - -Follow the steps in the sections below to prepare an -[AWS Aurora PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.CreatingConnecting.AuroraPostgreSQL.html) -database to work with RDI. - -## 1. Create a parameter group - -In the [Relational Database Service (RDS) console](https://console.aws.amazon.com/rds/), -navigate to **Parameter groups > Create parameter group**. You will see the panel shown -below: - -{{Create parameter group panel}} - -Enter the following information: - -| Name | Value | -| :-- | :-- | -| **Parameter group name** | rdi-aurora-pg | -| **Description** | Enable logical replication for RDI | -| **Engine Type** | Aurora PostgreSQL | -| **Parameter group family** | aurora-postgresql15 | -| **Type** | DB Cluster Parameter Group | - -Select **Create** to create the parameter group. - -## 2. Edit the parameter group - -Navigate to **Parameter groups** in the console. Select the `rdi-aurora-pg` -group you have just created and then select **Edit** . You will see this panel: - -{{Edit parameter group panel}} - -Search for the `rds.logical_replication` parameter and set its value to 1. Then, -select **Save Changes**. - -## 3. Select the new parameter group - -Go back to your target database on the RDS console, select **Modify** and then -scroll down to **Additional Configuration**. Set -the **DB Cluster Parameter Group** to the value `rdi-aurora-pg` that you have just added: - -{{Additional Configuration panel}} diff --git a/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/_index.md b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/_index.md new file mode 100644 index 000000000..061188eb1 --- /dev/null +++ b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/_index.md @@ -0,0 +1,22 @@ +--- +Title: Prepare AWS RDS and Aurora databases for RDI +aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/aws-aurora-rds/ +alwaysopen: false +categories: +- docs +- integrate +- rs +- rdi +description: Learn how to prepare AWS RDS and Aurora databases for RDI. +group: di +linkTitle: Prepare AWS RDS and Aurora +summary: Prepare AWS Aurora and AWS RDS databases to work with Redis Data Integration. +hideListLinks: false +type: integration +weight: 5 +--- + +You can use RDI with databases on [AWS Relational Database Service (RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) and [AWS Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html). 

The pages in this section give detailed instructions to get your source database ready for Debezium to use:
\ No newline at end of file
diff --git a/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-mysql.md b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-mysql.md
new file mode 100644
index 000000000..211aeae8c
--- /dev/null
+++ b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-mysql.md
@@ -0,0 +1,78 @@
---
Title: Prepare AWS Aurora MySQL/AWS RDS MySQL for RDI
aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-mysql/
alwaysopen: false
categories:
- docs
- integrate
- rs
- rdi
description: Enable CDC features in your source databases
group: di
hideListLinks: false
linkTitle: Prepare AWS Aurora/RDS MySQL
summary: Prepare AWS Aurora MySQL and AWS RDS MySQL databases to work with Redis Data Integration.
type: integration
weight: 2
---

Follow the steps in the sections below to prepare an [AWS Aurora MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.CreatingConnecting.Aurora.html) or [AWS RDS MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html) database to work with RDI.

## Create and apply parameter group

RDI requires some changes to database parameters. On AWS RDS and AWS Aurora, you change these parameters via a parameter group.

1. In the [Relational Database Service (RDS) console](https://console.aws.amazon.com/rds/), navigate to **Parameter groups > Create parameter group**. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.CreatingCluster.html) with the following settings:

    | Name | Value |
    | :-- | :-- |
    | **Parameter group name** | Enter a suitable parameter group name, like `rdi-mysql` |
    | **Description** | (Optional) Enter a description for the parameter group |
    | **Engine Type** | Choose **Aurora MySQL** for Aurora MySQL or **MySQL Community** for AWS RDS MySQL. |
    | **Parameter group family** | Choose **aurora-mysql8.0** for Aurora MySQL or **mysql8.0** for AWS RDS MySQL. |

    Select **Create** to create the parameter group.

1. Navigate to **Parameter groups** in the console. Select the parameter group you have just created and then select **Edit**. Change the following parameters:

    | Name | Value |
    | :-- | :-- |
    | `binlog_format` | `ROW` |
    | `binlog_row_image` | `FULL` |

    Select **Save Changes** to apply the changes to the parameter group.

1. Go back to your target database on the RDS console, select **Modify** and then scroll down to **Additional Configuration**. Set the **DB Cluster Parameter Group** to the group you just created.

    Select **Save changes** to apply the parameter group to the new database.
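
If you prefer to script this step, you can make the same changes with the AWS CLI. The commands below are a minimal sketch for an Aurora MySQL cluster; the group name and cluster identifier are example values, and the binlog parameters only take effect after the next reboot of the writer instance.

```sh
# Sketch: create a cluster parameter group, set the binlog parameters RDI
# needs, and attach the group to the cluster. Names here are examples only.
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name rdi-mysql \
    --db-parameter-group-family aurora-mysql8.0 \
    --description "Enable row-based binary logging for RDI"

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name rdi-mysql \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=pending-reboot" \
                 "ParameterName=binlog_row_image,ParameterValue=FULL,ApplyMethod=pending-reboot"

aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-mysql-cluster \
    --db-cluster-parameter-group-name rdi-mysql
```

For an AWS RDS MySQL instance (rather than Aurora), the equivalent commands operate on DB parameter groups (`create-db-parameter-group`, `modify-db-parameter-group`) and attach the group with `modify-db-instance`.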

## Create Debezium user

The Debezium connector needs a user account to connect to MySQL. This user must have appropriate permissions on all databases where you want Debezium to capture changes.

1. Connect to your database as an admin user and create a new user for the connector:

    ```sql
    CREATE USER '<username>'@'%' IDENTIFIED BY '<password>';
    ```

    Replace `<username>` and `<password>` with a username and password for the new user.

    The `%` means that the user can connect from any client. If you want to restrict the user to connect only from the RDI host, replace `%` with the IP address of the RDI host.

1. Grant the user the necessary permissions:

    ```sql
    GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES ON *.* TO '<username>'@'%';
    ```

    Replace `<username>` with the username of the Debezium user.

1. Finalize the user's permissions:

    ```sql
    FLUSH PRIVILEGES;
    ```
diff --git a/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-pgsql.md b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-pgsql.md
new file mode 100644
index 000000000..ec2dee520
--- /dev/null
+++ b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-pgsql.md
@@ -0,0 +1,82 @@
---
Title: Prepare AWS Aurora PostgreSQL/AWS RDS PostgreSQL for RDI
aliases:
- /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/aws-aur-pgsql/
- /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/aws-aurora-rds/aws-aur-pgsql/
- /integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aur-pgsql/
alwaysopen: false
categories:
- docs
- integrate
- rs
- rc
- rdi
description: Prepare AWS Aurora PostgreSQL databases to work with RDI
group: di
linkTitle: Prepare AWS Aurora PostgreSQL
summary: Prepare AWS Aurora PostgreSQL databases to work with Redis Data Integration.
type: integration
weight: 1
---

Follow the steps in the sections below to prepare an [AWS Aurora PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.CreatingConnecting.AuroraPostgreSQL.html) or [AWS RDS PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html) database to work with RDI.

## Create and apply parameter group

RDI requires some changes to database parameters. On AWS RDS and AWS Aurora, you change these parameters via a parameter group.

1. In the [Relational Database Service (RDS) console](https://console.aws.amazon.com/rds/), navigate to **Parameter groups > Create parameter group**. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.CreatingCluster.html) with the following settings:

    | Name | Value |
    | :-- | :-- |
    | **Parameter group name** | Enter a suitable parameter group name, like `rdi-aurora-pg` or `rdi-rds-pg` |
    | **Description** | (Optional) Enter a description for the parameter group |
    | **Engine Type** | Choose **Aurora PostgreSQL** for Aurora PostgreSQL or **PostgreSQL** for AWS RDS PostgreSQL. |
    | **Parameter group family** | Choose **aurora-postgresql15** for Aurora PostgreSQL or **postgresql13** for AWS RDS PostgreSQL. |

    Select **Create** to create the parameter group.

1. Navigate to **Parameter groups** in the console. Select the group you have just created and then select **Edit**. Change the following parameters:

    | Name | Value |
    | :-- | :-- |
    | `rds.logical_replication` | `1` |

    Select **Save Changes** to apply the changes to the parameter group.

1. Go back to your database on the RDS console, select **Modify** and then scroll down to **Additional Configuration**. Set the **DB Cluster Parameter Group** to the group you just created.

    Select **Save changes** to apply the parameter group to your database.
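
After the parameter group is attached and the database has been rebooted, you can optionally confirm that logical replication is active before you create the Debezium user. This is a sketch only; the connection details below are placeholders for your own values.

```sh
# Optional check: rds.logical_replication should report "on" and
# wal_level should report "logical" once the change has taken effect.
# Replace the placeholders with your own endpoint, user, and database.
psql -h <database endpoint> -p 5432 -U postgres -d postgres \
     -c "SHOW rds.logical_replication;" \
     -c "SHOW wal_level;"
```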

## Create Debezium user

The Debezium connector needs a user account to connect to PostgreSQL. This user must have appropriate permissions on all databases where you want Debezium to capture changes.

1. Connect to PostgreSQL as the `postgres` user and create a new user for the connector:

    ```sql
    CREATE ROLE <username> WITH LOGIN PASSWORD '<password>' VALID UNTIL 'infinity';
    ```

    Replace `<username>` and `<password>` with a username and password for the new user.

1. Grant the user the necessary replication permissions:

    ```sql
    GRANT rds_replication TO <username>;
    ```

    Replace `<username>` with the username of the Debezium user.

1. Connect to your database as the `postgres` user and grant the new user access to one or more schemas in the database:

    ```sql
    GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO <username>;
    ```

    Replace `<username>` with the username of the Debezium user and `<schema>` with the schema name.

diff --git a/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-rds-sqlserver.md b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-rds-sqlserver.md
new file mode 100644
index 000000000..7f96b1948
--- /dev/null
+++ b/content/integrate/redis-data-integration/data-pipelines/prepare-dbs/aws-aurora-rds/aws-rds-sqlserver.md
@@ -0,0 +1,100 @@
---
Title: Prepare Microsoft SQL Server on AWS RDS for RDI
aliases: /integrate/redis-data-integration/ingest/data-pipelines/prepare-dbs/aws-aurora-rds/aws-rds-sqlserver/
alwaysopen: false
categories:
- docs
- integrate
- rs
- rdi
description: Enable CDC features in your source databases
group: di
hideListLinks: false
linkTitle: Prepare Microsoft SQL Server on AWS RDS
summary: Prepare Microsoft SQL Server on AWS RDS databases to work with Redis Data Integration.
type: integration
weight: 3
---

Follow the steps in the sections below to prepare a [Microsoft SQL Server on AWS RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.SQLServer.html) database to work with RDI.

## Create the Debezium user

The Debezium connector needs a user account to connect to SQL Server. This user must have appropriate permissions on all databases where you want Debezium to capture changes.

1. Connect to your database as an admin user and create a new user for the connector:

    ```sql
    USE master
    GO
    CREATE LOGIN <username> WITH PASSWORD = '<password>'
    GO
    USE <database>
    GO
    CREATE USER <username> FOR LOGIN <username>
    GO
    ```

    Replace `<username>` and `<password>` with a username and password for the new user and replace `<database>` with the name of your database.

1. Grant the user the necessary permissions:

    ```sql
    USE master
    GO
    GRANT VIEW SERVER STATE TO <username>
    GO
    USE <database>
    GO
    EXEC sp_addrolemember N'db_datareader', N'<username>'
    GO
    ```

    Replace `<username>` with the username of the Debezium user and replace `<database>` with the name of your database.
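
Before enabling CDC, you can optionally confirm that the new login can connect and has the `db_datareader` role. This is a minimal sketch using the `sqlcmd` client; the server endpoint, database, username, and password shown are placeholders.

```sh
# Optional check: connect as the new Debezium user and confirm that it is a
# member of db_datareader (the query returns 1 when it is).
# Replace the placeholders with your own server endpoint and credentials.
sqlcmd -S <server endpoint>,1433 -d <database> -U <username> -P '<password>' \
       -Q "SELECT IS_ROLEMEMBER('db_datareader') AS is_datareader;"
```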

## Enable CDC on the database

Change Data Capture (CDC) must be enabled for the database and for each table you want to capture.

1. Enable CDC for the database by running the following command:

    ```sql
    EXEC msdb.dbo.rds_cdc_enable_db '<database>'
    GO
    ```

    Replace `<database>` with the name of your database.

1. Enable CDC for each table you want to capture by running the following commands:

    ```sql
    USE <database>
    GO
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'<schema>',
        @source_name = N'<table>',
        @role_name = N'<role>',
        @supports_net_changes = 0
    GO
    ```

    Replace `<database>` with the name of your database, `<schema>` with the name of the schema containing the table, `<table>` with the name of the table, and `<role>` with the name of a new role that will be created to manage access to the CDC data.

    {{< note >}}
The value for `@role_name` can’t be a fixed database role, such as `db_datareader`.
Specifying a new name will create a corresponding database role that has full access to the
captured change data.
    {{< /note >}}

1. Add the Debezium user to the CDC role:

    ```sql
    USE <database>
    GO
    EXEC sp_addrolemember N'<role>', N'<username>'
    GO
    ```

    Replace `<role>` with the name of the role you created in the previous step and replace `<username>` with the username of the Debezium user.
\ No newline at end of file
diff --git a/content/operate/rc/databases/rdi/_index.md b/content/operate/rc/databases/rdi/_index.md
new file mode 100644
index 000000000..6462a415e
--- /dev/null
+++ b/content/operate/rc/databases/rdi/_index.md
@@ -0,0 +1,69 @@
---
Title: Data Integration
alwaysopen: false
categories:
- docs
- operate
- rc
description: Use Redis Data Integration with Redis Cloud.
hideListLinks: true
weight: 99
---

Redis Cloud now supports [Redis Data Integration (RDI)]({{}}), a fast and simple way to bring your data into Redis from other types of primary databases.

A relational database usually handles queries much more slowly than a Redis database. If your application uses a relational database and makes many more reads than writes (which is the typical case), then you can improve performance by using Redis as a cache to handle the read queries quickly. Redis Cloud uses [ingest]({{}}) to help you offload all read queries from the application database to Redis automatically.

Using a data pipeline lets you have a cache that is always ready for queries. RDI data pipelines ensure that any changes made to your primary database are captured in your Redis cache within a few seconds, preventing cache misses and stale data within the cache.

RDI helps Redis customers sync Redis Cloud with live data from their primary databases to:
- Meet the required speed and scale of read queries and provide an excellent and predictable user experience.
- Save resources and time when building pipelines and coding data transformations.
- Reduce the total cost of ownership by saving money on expensive database read replicas.

Using RDI with Redis Cloud simplifies managing your data integration pipeline. No need to worry about hardware or underlying infrastructure, as Redis Cloud manages that for you. Creating the data flow from source to target is much easier, and there are validations in place to reduce errors.

## Data pipeline architecture

An RDI data pipeline sits between your source database and your target Redis database. Initially, the pipeline reads all of the data and imports it into the target database during the *initial sync* phase. After this initial sync is complete, the data pipeline enters the *streaming* phase, where changes are captured as they happen. Changes in the source database are added to the target within a few seconds of capture. The data pipeline translates relational database rows to Redis hashes or JSON documents.

For more info on how RDI works, see [RDI Architecture]({{}}).

### Pipeline security

Data pipelines are set up to ensure a high level of data security. Source database credentials and TLS secrets are stored in AWS Secrets Manager and shared using the Kubernetes CSI driver for secrets. See [Share source database credentials]({{}}) to learn how to share your source database credentials and TLS certificates with Redis Cloud.
+ +Connections to the source database use Java Database Connectivity (JDBC) through [AWS PrivateLink](https://aws.amazon.com/privatelink/), ensuring that the data pipeline is only exposed to the specific database endpoint. See [Set up connectivity]({{}}) to learn how to connect your PrivateLink to the Redis Cloud VPC. + +RDI encrypts all network connections with TLS. The pipeline will process data from the source database in-memory and write it to the target database using a TLS connection. There are no external connections to your data pipeline except from Redis Cloud management services. + +## Prerequisites + +Before you can create a data pipeline, you must have: + +- A [Redis Cloud Pro database]({{< relref "/operate/rc/databases/create-database/create-pro-database-new" >}}) hosted on Amazon Web Services (AWS). This will be the target database. +- One supported source database, hosted on an AWS EC2 instance, AWS RDS, or AWS Aurora: + +{{< embed-md "rdi-supported-source-versions.md" >}} + +{{< note >}} +Please be aware of the following limitations: + +- The target database must be a Redis Cloud Pro database hosted on Amazon Web Services (AWS). Redis Cloud Essentials databases and databases hosted on Google Cloud do not support Data Integration. +- The target database must use multi-zone [high availability]({{< relref "/operate/rc/databases/configuration/high-availability" >}}). +- The target database can use TLS, but can not use mutual TLS. +- The target database cannot be in the same subscription as another database that has a data pipeline. +- Source databases must also be hosted on AWS. +- You must use a [custom encryption key on AWS](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) to create the instance hosting the database. +- One source database can only be synced to one target database. +- You must be able to set up AWS PrivateLink to connect your Source database to your target database. RDI only works with AWS PrivateLink and not VPC Peering or other private connectivity options. +{{< /note >}} + +## Get started + +To create a new data pipeline, you need to: + +1. [Prepare your source database]({{}}) and any associated credentials. +2. [Define the source connection and data pipeline]({{}}) by selecting which tables to sync. + +Once your data pipeline is defined, you can [view and edit]({{}}) it. \ No newline at end of file diff --git a/content/operate/rc/databases/rdi/define.md b/content/operate/rc/databases/rdi/define.md new file mode 100644 index 000000000..8987ccac3 --- /dev/null +++ b/content/operate/rc/databases/rdi/define.md @@ -0,0 +1,96 @@ +--- +Title: Define data pipeline +alwaysopen: false +categories: +- docs +- operate +- rc +description: Define the source connection and data pipeline. +hideListLinks: true +weight: 2 +--- + +After you have [prepared your source database]({{}}) and connection information, you can set up your new pipeline. To do this: + +1. [Define the source connection](#define-source-connection) by entering all required source database information. +2. [Define the data pipeline](#define-data-pipeline) by selecting the data that you want to sync from your source database to the target database. + +## Define source connection + +1. In the [Redis Cloud console](https://cloud.redis.io/), go to your target database and select the **Data Pipeline** tab. +1. Select **Define source database**. + {{}} +1. Enter a **Pipeline name**. + {{}} +1. A **Deployment CIDR** is automatically generated for you. 
If, for any reason, a CIDR is not generated, enter a valid CIDR that does not conflict with your applications or other databases. +1. In the **Source database connectivity** section, enter the **PrivateLink service name** of the [PrivateLink connected to your source database]({{< relref "/operate/rc/databases/rdi/setup#set-up-connectivity" >}}). + {{}} +1. Enter your database details. This depends on your database type, and includes: + - **Port**: The database's port + - **Database**: Your database's name, or the root database *(PostgreSQL, Oracle only)*, or a comma-separated list of one or more databases you want to connect to *(SQL Server only)* + - **Database Server ID**: Unique ID for the replication client. Enter a number that is not used by any existing replication clients *(mySQL and mariaDB only)* + - **PDB**: Name of the Oracle pluggable database *(Oracle only)* +1. Enter the ARN of your [database credentials secret]({{< relref "/operate/rc/databases/rdi/setup#share-source-database-credentials" >}}) in the **Source database secrets ARN** field. +1. Select **Start pipeline setup**. + {{}} +1. Redis Cloud will attempt to connect to PrivateLink. If your PrivateLink does not allow automatic acceptance of incoming connections, accept the incoming connection on AWS PrivateLink to proceed. See [Accept or Reject PrivateLink connection requests](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#accept-reject-connection-requests). + + If Redis Cloud can't find your PrivateLink connection, make sure that the PrivateLink service name is correct and that Redis Cloud is listed as an Allowed Principal for your VPC. See [Set up connectivity]({{}}) for more info. + +At this point, Redis Cloud will provision the pipeline infrastructure that will allow you to define your data pipeline. + +{{}} + +Pipelines are provisioned in the background. You aren't allowed to make changes to your data pipeline or to your database during provisioning. This process will take about an hour, so you can close the window and come back later. + +When your pipeline is provisioned, select **Complete setup**. You will then [define your data pipeline](#define-data-pipeline). + +{{}} + +## Define data pipeline + +After your pipeline is provisioned, you will be able to define your pipeline. You will select the database schemas, tables, and columns that you want to import and synchronize with your primary database. + +### Configure a new pipeline + +1. In the [Redis Cloud console](https://cloud.redis.io/), go to your target database and select the **Data Pipeline** tab. If your pipeline is already provisioned, select **Complete setup** to go to the **Select data** section. + {{}} +1. Select the Schema and Tables you want to migrate to the target database from the **Source data selection** list. + {{}} + + You can select any number of columns from a table. + + {{}} + + If any tables are missing a unique constraint, the **Missing unique constraint** list will appear. Select the columns that define a unique constraint for those tables from the list. + + {{}} + + {{}} + + Select **Add schema** to add more database schemas. + + {{}} + + Select **Delete** to delete a schema. You must have at least one schema to continue. + + {{}} + + After you've selected the schemas and tables you want to sync, select **Continue**. + + {{}} + +1. In the **Pipeline definition** section, select the Redis data type to write keys to the target. You can choose **Hash** or **JSON** if the target database supports JSON. 
+ {{}} + Select **Continue**. + {{}} + +1. Review the tables you selected in the **Summary**. If everything looks correct, select **Start ingest** to start ingesting data from your source database. + + {{}} + +At this point, the data pipeline will ingest data from the source database to your target Redis database. This process will take time, especially if you have a lot of records in your source database. + +After this initial sync is complete, the data pipeline enters the *change streaming* phase, where changes are captured as they happen. Changes in the source database are added to the target within a few seconds of capture. + +You can view the status of your data pipeline in the **Data pipeline** tab of your database. See [View and edit data pipeline]({{}}) to learn more. \ No newline at end of file diff --git a/content/operate/rc/databases/rdi/setup.md b/content/operate/rc/databases/rdi/setup.md new file mode 100644 index 000000000..5242a4ca7 --- /dev/null +++ b/content/operate/rc/databases/rdi/setup.md @@ -0,0 +1,253 @@ +--- +Title: Prepare source database +alwaysopen: false +categories: +- docs +- operate +- rc +description: Prepare your source database, network setup, and database credentials for Data integration. +hideListLinks: true +weight: 1 +--- + +## Create new data pipeline + +1. In the [Redis Cloud console](https://cloud.redis.io/), go to your target database and select the **Data Pipeline** tab. +1. Select **Create data pipeline**. + {{}} +1. Select your source database type. The following database types are supported: + - MySQL + - mariaDB + - Oracle + - SQL Server + - PostgreSQL + {{}} +1. If you know the size of your source database, enter it into the **Source dataset size** field. + {{}} +1. Under **Setup connectivity**, save the provided ARN and extract the AWS account ID for the account associated with your Redis Cloud cluster from it. + + {{}} + + The AWS account ID is the string of numbers after `arn:aws:iam::` in the ARN. For example, if the ARN is `arn:aws:iam::123456789012:role/redis-data-pipeline`, the AWS account ID is `123456789012`. + +## Prepare source database + +Before using the pipeline, you must first prepare your source database to use the Debezium connector for change data capture (CDC). See [Prerequisites]({{}}) to find a list of supported source databases and database versions. + +See [Prepare source databases]({{}}) to find steps for your database type: +- Hosted on an AWS EC2 instance: + - [MySQL and mariaDB]({{}}) + - [Oracle]({{}}) + - [SQL Server]({{}}) + - [PostgreSQL]({{}}) +- Hosted on AWS RDS or AWS Aurora: + - [AWS Aurora PostgreSQL and AWS RDS PostgreSQL]({{}}) + - [AWS Aurora MySQL and AWS RDS MySQL]({{}}) + - [AWS RDS SQL Server]({{}}) + +See the [RDI architecture overview]({{< relref "/integrate/redis-data-integration/architecture#overview" >}}) for more information about CDC. + +## Set up connectivity + +To ensure that you can connect your Redis Cloud database to the source database, you need to set up an endpoint service through AWS PrivateLink. + +Choose the steps for your database setup: +- [Database hosted on an AWS EC2 instance](#database-hosted-on-an-aws-ec2-instance) +- [Database hosted on AWS RDS or AWS Aurora](#database-hosted-on-aws-rds-or-aws-aurora) + +### Database hosted on an AWS EC2 instance + +The following diagram shows the network setup for a database hosted on an AWS EC2 instance. + +{{}} + +To do this: + +1. 
[Create a network load balancer](#create-network-load-balancer-ec2) that will route incoming HTTP requests to your database.
1. [Create an endpoint service](#create-endpoint-service-ec2) through AWS PrivateLink.

#### Create network load balancer {#create-network-load-balancer-ec2}

In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Compute** > **EC2**. [Create a network load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html#configure-load-balancer) with the following settings:

1. In **Basic configuration**:
    - **Scheme**: Select **Internal**.
    - **Load balancer IP address type**: Select **IPv4**.
1. In **Network mapping**, select the VPC and availability zone associated with your source database.
1. In **Security groups**, select the security group associated with your source database.
1. In **Listeners and routing**:
    1. Select **Create target group** to [create a target group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html) with the following settings:
        1. In **Specify group details**:
            - **Target type**: Select **Instances**.
            - **Protocol : Port**: Select **TCP**, and then enter the port number where your database is exposed.
            - The **IP address type** and **VPC** should be selected already and match the VPC you selected earlier.
        1. In **Register targets**, select the EC2 instance that runs your source database, enter the port, and select **Include as pending below**. Then, select **Create target group** to create your target group. Return to **Listeners and routing** in the Network Load Balancer setup.
    1. Set the following **Listener** properties:
        - **Protocol**: Select **TCP**.
        - **Port**: Enter your source database's port.
        - **Default action**: Select the target group you created in the previous step.
1. Review the network load balancer settings, and then select **Create load balancer** to continue.
1. After the network load balancer is active, select **Security**, and then select the security group ID to open the Security group settings.
1. Select **Edit inbound rules**, then **Add rule** to add a rule with the following settings:
    - **Type**: Select **HTTP**.
    - **Source**: Select **Anywhere - IPv4**.

    Select **Save rules** to save your changes.

#### Create endpoint service {#create-endpoint-service-ec2}

In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Networking & Content Delivery** > **VPC**. There, select **PrivateLink and Lattice** > **Endpoint services**. [Create an endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) with the following settings:

1. In **Available load balancers**, select the [network load balancer](#create-network-load-balancer-ec2) you created.
1. In **Additional settings**, choose the following settings:
    - **Require acceptance for endpoint**: Select **Acceptance required**.
    - **Supported IP address types**: Select **IPv4**.
1. Select **Create** to create the endpoint service.

After you create the endpoint service, you need to add Redis Cloud as an Allowed Principal on your [endpoint service VPC permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions).

1. In the Redis Cloud Console, copy the Amazon Resource Name (ARN) provided in the **Setup connectivity** section.
1. Return to the endpoint service list on the [Amazon VPC console](https://console.aws.amazon.com/vpc/). Select the endpoint service you just created.
1. Navigate to the **Allow principals** tab.
1. Add the Redis Cloud ARN you copied and choose **Allow principals**.
1. Save the service name for later.
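
If you manage AWS from the command line, you can also allow the Redis Cloud principal and look up the service name with the AWS CLI. This is a sketch only; the service ID and ARN below are placeholders for your own endpoint service and the ARN from the **Setup connectivity** section.

```sh
# Sketch: allow the Redis Cloud ARN to connect to your endpoint service,
# then read back the service name to use later in the Redis Cloud console.
# Replace the service ID and ARN with your own values.
aws ec2 modify-vpc-endpoint-service-permissions \
    --service-id vpce-svc-0123456789abcdef0 \
    --add-allowed-principals "arn:aws:iam::123456789012:role/redis-data-pipeline"

aws ec2 describe-vpc-endpoint-service-configurations \
    --service-ids vpce-svc-0123456789abcdef0 \
    --query "ServiceConfigurations[0].ServiceName"
```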

For more details on AWS PrivateLink, see [Share your services through AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-share-your-services.html).

### Database hosted on AWS RDS or AWS Aurora

The following diagram shows the network setup for a database hosted on AWS RDS or AWS Aurora.

{{}}

To do this:

1. [Create an RDS Proxy](#create-rds-proxy) that will route requests to your database.
1. [Create a network load balancer](#create-network-load-balancer-rds) that will route incoming HTTP requests to the RDS proxy.
1. [Create an endpoint service](#create-endpoint-service-rds) through AWS PrivateLink.

#### Create RDS proxy {#create-rds-proxy}

In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Database** > **Aurora and RDS**. [Create an RDS proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy-creating.html) that can access your database.

#### Create network load balancer {#create-network-load-balancer-rds}

In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Compute** > **EC2**. [Create a network load balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html#configure-load-balancer) with the following settings:

1. In **Basic configuration**:
    - **Scheme**: Select **Internal**.
    - **Load balancer IP address type**: Select **IPv4**.
1. In **Network mapping**, select the VPC and availability zone associated with your source database.
1. In **Security groups**, select the security group associated with your source database.
1. In **Listeners and routing**:
    1. Select **Create target group** to [create a target group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html) with the following settings:
        1. In **Specify group details**:
            - **Target type**: Select **IP Addresses**.
            - **Protocol : Port**: Select **TCP**, and then enter the port number where your database is exposed.
            - The **IP address type** and **VPC** should be selected already and match the VPC you selected earlier.
        1. In **Register targets**, enter the static IP address of your RDS proxy, enter the port, and select **Include as pending below**. Then, select **Create target group** to create your target group. Return to **Listeners and routing** in the Network Load Balancer setup.

            To get the static IP address of your RDS proxy, run the following command on an EC2 instance in the same VPC as the proxy:

            ```sh
            $ nslookup <RDS proxy endpoint>
            ```

            Replace `<RDS proxy endpoint>` with the endpoint of your RDS proxy.
    1. Set the following **Listener** properties:
        - **Protocol**: Select **TCP**.
        - **Port**: Enter your source database's port.
        - **Default action**: Select the target group you created in the previous step.
1. Review the network load balancer settings, and then select **Create load balancer** to continue.
1. After the network load balancer is active, select **Security**, and then select the security group ID to open the Security group settings.
1.
Select **Edit inbound rules**, then **Add rule** to add a rule with the following settings: + - **Type**: Select **HTTP**. + - **Source**: Select **Anywhere - IPv4**. + Select **Save rules** to save your changes. + +#### Create endpoint service {#create-endpoint-service-rds} + +In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Networking & Content Delivery** > **VPC**. There, select **PrivateLink and Lattice** > **Endpoint services**. [Create an endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) with the following settings: + +1. In **Available load balancers**, select the [network load balancer](#create-network-load-balancer-rds) you created. +1. In **Additional settings**, choose the following settings: + - **Require acceptance for endpoint**: Select **Acceptance required**. + - **Supported IP address types**: Select **IPv4**. +1. Select **Create** to create the endpoint service. + +After you create the endpoint service, you need to add Redis Cloud as an Allowed Principal on your [endpoint service VPC permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions). + +1. In the Redis Cloud Console, copy the Amazon Resource Name (ARN) provided in the **Setup connectivity** section. +1. Return to the endpoint service list on the [Amazon VPC console](https://console.aws.amazon.com/vpc/). Select the endpoint service you just created. +1. Navigate to **Allow principals** tab. +1. Add the Redis Cloud ARN you copied and choose **Allow principals**. +1. Save the service name for later. + +For more details on AWS PrivateLink, see [Share your services through AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-share-your-services.html). + +## Share source database credentials + +You need to share your source database credentials and certificates in an Amazon secret with Redis Cloud so that the pipeline can connect to your database. + +To do this, you need to: +1. [Create an encryption key](#create-encryption-key) using AWS Key Management Service with the right permissions. +1. [Create a secret](#create-database-credentials-secret) containing the source database credentials encrypted using that key. + +### Create encryption key + +In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Security, Identity, and Compliance** > **Key Management Service**. [Create an encryption key](https://docs.aws.amazon.com/kms/latest/developerguide/create-symmetric-cmk.html) with the following settings: + +1. In **Step 1 - Configure key**: + - **Key type**: Select **Symmetric**. + - **Key usage**: Select **Encrypt and decrypt**. + - Under **Advanced options**, set the following: + - **Key material origin**: Select **KMS - recommended**. + - **Regionality**: Select **Single-Region key**. +1. In **Step 2 - Add labels**, add an alias and description for the key. +1. In **Step 3 - Define key administrative permissions**, under **Key deletion**, select **Allow key administrators to delete this key**. +1. In **Step 4 - Define key usage permissions**, under **Other AWS accounts**, select **Add another AWS account**. Enter the AWS account ID for the Redis Cloud cluster that you saved earlier. + +Review the key policy and key settings, and then select **Finish** to create the key. 
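
If you prefer to script this step, the key can also be created with the AWS CLI, as in the sketch below. The alias is an example value, and granting the Redis Cloud AWS account use of the key is still done through the key policy, as described in the console steps above.

```sh
# Sketch: create a symmetric encryption key and give it an alias.
# The alias below is an example; remember to grant the Redis Cloud AWS
# account use of the key in the key policy.
KEY_ID=$(aws kms create-key \
    --description "Encrypts the RDI source database credentials secret" \
    --key-spec SYMMETRIC_DEFAULT --key-usage ENCRYPT_DECRYPT \
    --query KeyMetadata.KeyId --output text)

aws kms create-alias \
    --alias-name alias/rdi-source-db-secret \
    --target-key-id "$KEY_ID"
```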

### Create database credentials secret

In the [AWS Management Console](https://console.aws.amazon.com/), use the **Services** menu to locate and select **Security, Identity, and Compliance** > **Secrets Manager**. [Create a secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) of type **Other type of secret** with the following settings:

- **Key/value pairs**: Enter the following key/value pairs.

    - `username`: Database username
    - `password`: Database password
    - `trust_certificate`: Server certificate in PEM format *(TLS only)*
    - `client_public_key`: [X.509 client certificate](https://en.wikipedia.org/wiki/X.509) or chain in PEM format *(mTLS only)*
    - `client_private_key`: Key for the client certificate or chain in PEM format *(mTLS only)*
    - `client_private_key_passphrase`: Passphrase or password for the client certificate or chain in PEM format *(mTLS only)*

    {{}}
If your source database has TLS or mTLS enabled, we recommend that you enter the `trust_certificate`, `client_public_key`, and `client_private_key` into the secret editor using the **Key/Value** input method instead of the **JSON** input method. Pasting directly into the JSON editor may cause an error.
    {{}}

- **Encryption key**: Select the [encryption key](#create-encryption-key) you created earlier.

- **Resource permissions**: Add the following permissions to your secret to allow the Redis data pipeline to access your secret. Replace `<AWS account ID>` with the AWS account ID for the Redis Cloud cluster that you saved earlier.

    ```json
    {
      "Version" : "2012-10-17",
      "Statement" : [ {
        "Sid" : "RedisDataIntegrationRoleAccess",
        "Effect" : "Allow",
        "Principal" : "*",
        "Action" : [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ],
        "Resource" : "*",
        "Condition" : {
          "StringLike" : {
            "aws:PrincipalArn" : "arn:aws:iam::<AWS account ID>:role/redis-data-pipeline-secrets-role"
          }
        }
      } ]
    }
    ```

After you store this secret, you can view and copy the [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_iam-permissions.html#iam-resources) of your secret on the secret details page.

## Next steps

After you have set up your source database and prepared connectivity and credentials, select **Define source database** to [define your source connection and data pipeline]({{}}).

{{}}
\ No newline at end of file
diff --git a/content/operate/rc/databases/rdi/view-edit.md b/content/operate/rc/databases/rdi/view-edit.md
new file mode 100644
index 000000000..b16fbcf4a
--- /dev/null
+++ b/content/operate/rc/databases/rdi/view-edit.md
@@ -0,0 +1,134 @@
---
Title: View and edit data pipeline
alwaysopen: false
categories:
- docs
- operate
- rc
description: Observe and change your data pipeline.
hideListLinks: true
weight: 3
---

Use the **Data pipeline** tab in your database to view and edit your data pipeline.

The **Data pipeline** tab gives an overview of your data pipeline and lets you view your data stream metrics.

{{}}

The **Status** table shows statistics for the whole data pipeline:
- **Status**: The status of the data pipeline. Possible statuses include:
    | Status | Description |
    |--------|-------------|
    | {{}} | The data pipeline is ingesting all records from the source database into the target database. |
    | {{}} | The data pipeline is capturing new changes from the source database as they happen. Changes in the source database are added to the target database within a few seconds.
| + | {{}}| The data pipeline has been [stopped](#stop-and-restart-data-pipeline). | + | {{}} | There is an error in the data pipeline. [Reset the pipeline](#reset-data-pipeline) and contact support if the issue persists. | +- **Total ingested**: Total number of records ingested from the source database. +- **Total inserted**: Total number of records inserted into the target database. +- **Total filtered**: Total number of records filtered from being inserted into the target database. +- **Total rejected**: Total number of records that could not be parsed or inserted into the target database. + +The **Data stream metrics** table shows the following metrics for each data stream: +| Metric | Description | +|--------|-------------| +| **Name** | Name of the data stream. Each stream corresponds to a table from the source database. | +| **Total** | Total number of records that arrived from the source table. | +| **Pending** | Number of records from the source table that are waiting to be processed. | +| **Inserted** | Number of new records from the source table that have been written to the target database. | +| **Updated** | Number of updated records from the source table that have been updated in the target database. | +| **Deleted** | Number of deleted records from the source table that have been deleted in the target database. | +| **Filtered** | Number of records from the source table that were filtered from being inserted into the target database. | +| **Rejected** | Number of records from the source table that could not be parsed or inserted into the target database. | + +## Edit data pipeline + +To change the data you want to ingest from the data pipeline: + +1. From the **Data pipeline** tab, select **Edit**. + + {{}} + +1. Select the schema and tables you want to migrate to the target database from the **Source data selection** list. + + {{}} + + You can select any number of columns from a table. + + {{}} + + If any tables are missing a unique constraint, the **Missing unique constraint** list will appear. Select the columns that define a unique constraint for those tables from the list. + + {{}} + + {{}} + + Select **Add schema** to add more database schemas. + + {{}} + + Select **Delete** to delete a schema. You must have at least one schema to continue. + + {{}} + + After you've selected the schemas and tables you want to sync, select **Continue**. + + {{}} + +1. In the **Pipeline definition** section, select the Redis data type to write keys to the target. You can choose **Hash** or **JSON** if the target database supports JSON. + + {{}} + + Select **Continue**. + + {{}} + +1. Review the tables you selected in the **Summary** and select how you want to update the data pipeline: + + {{}} + + - **Apply to new data changes only**: The data pipeline will only synchronize new updates to the schema and tables selected. The data pipeline will not ingest any data from new schemas or tables that are selected. + - **Reset pipeline (re-process all data)**: The data pipeline will re-ingest all of the selected data. + - **Flush cached data and reset pipeline**: The data pipeline will flush the target Redis database, and then re-ingest all of the selected data from the source database. + +1. Select **Apply changes**. + + {{}} + +At this point, the data pipeline will apply the changes. If you selected **Reset pipeline** or **Flush cached data and reset pipeline**, the data pipeline will ingest data from the source database to the target database. 
After this initial sync is complete, the data pipeline enters the *change streaming* phase, where changes are captured as they happen. + +If you selected **Apply to new data changes only**, the data pipeline will enter the *change streaming* phase without ingesting data. + +## Reset data pipeline + +Resetting the data pipeline creates a new baseline snapshot from the current state of your source database, and re-processes the data from the source database to the target Redis database. You may want to reset the pipeline if the source and target databases were disconnected or you made large changes to the data pipeline. + +To reset the data pipeline and restart the ingest process: + +1. From the **Data pipeline** tab, select **More actions**, and then **Reset pipeline**. + +1. If you want to flush the database, check **Flush target database**. + +1. Select **Reset data pipeline**. + +At this point, the data pipeline will re-ingest data from the source database to your target Redis database. + +## Stop and restart data pipeline + +To stop the data pipeline from synchronizing new data: + +1. From the **Data pipeline** tab, select **More actions**, and then **Stop pipeline**. + +1. Select **Stop data pipeline** to confirm. + +Stopping the data pipeline will suspend data processing. To restart the pipeline from the **Data pipeline** tab, select **More actions**, and then **Start pipeline**. + +## Delete pipeline + +To delete the data pipeline: + +1. From the **Data pipeline** tab, select **More actions**, and then **Delete pipeline**. + +1. Select **Delete data pipeline** to confirm. + +Deleted data pipelines cannot be recovered. \ No newline at end of file diff --git a/static/images/rc/rdi/pipeline-status-error.png b/static/images/rc/rdi/pipeline-status-error.png new file mode 100644 index 000000000..d0c10cee8 Binary files /dev/null and b/static/images/rc/rdi/pipeline-status-error.png differ diff --git a/static/images/rc/rdi/pipeline-status-initial-sync.png b/static/images/rc/rdi/pipeline-status-initial-sync.png new file mode 100644 index 000000000..8cf4beaf5 Binary files /dev/null and b/static/images/rc/rdi/pipeline-status-initial-sync.png differ diff --git a/static/images/rc/rdi/pipeline-status-stopped.png b/static/images/rc/rdi/pipeline-status-stopped.png new file mode 100644 index 000000000..cace177ea Binary files /dev/null and b/static/images/rc/rdi/pipeline-status-stopped.png differ diff --git a/static/images/rc/rdi/pipeline-status-streaming.png b/static/images/rc/rdi/pipeline-status-streaming.png new file mode 100644 index 000000000..912c8c2b2 Binary files /dev/null and b/static/images/rc/rdi/pipeline-status-streaming.png differ diff --git a/static/images/rc/rdi/rdi-add-schema.png b/static/images/rc/rdi/rdi-add-schema.png new file mode 100644 index 000000000..d33a0be92 Binary files /dev/null and b/static/images/rc/rdi/rdi-add-schema.png differ diff --git a/static/images/rc/rdi/rdi-apply-changes.png b/static/images/rc/rdi/rdi-apply-changes.png new file mode 100644 index 000000000..01e1d2328 Binary files /dev/null and b/static/images/rc/rdi/rdi-apply-changes.png differ diff --git a/static/images/rc/rdi/rdi-complete-setup.png b/static/images/rc/rdi/rdi-complete-setup.png new file mode 100644 index 000000000..e5467ef40 Binary files /dev/null and b/static/images/rc/rdi/rdi-complete-setup.png differ diff --git a/static/images/rc/rdi/rdi-configure-new-pipeline.png b/static/images/rc/rdi/rdi-configure-new-pipeline.png new file mode 100644 index 000000000..4ceeb8fa2 Binary files /dev/null and 
b/static/images/rc/rdi/rdi-configure-new-pipeline.png differ diff --git a/static/images/rc/rdi/rdi-continue-button.png b/static/images/rc/rdi/rdi-continue-button.png new file mode 100644 index 000000000..704086a16 Binary files /dev/null and b/static/images/rc/rdi/rdi-continue-button.png differ diff --git a/static/images/rc/rdi/rdi-create-data-pipeline.png b/static/images/rc/rdi/rdi-create-data-pipeline.png new file mode 100644 index 000000000..0bbee9396 Binary files /dev/null and b/static/images/rc/rdi/rdi-create-data-pipeline.png differ diff --git a/static/images/rc/rdi/rdi-define-connectivity.png b/static/images/rc/rdi/rdi-define-connectivity.png new file mode 100644 index 000000000..e0f297a5c Binary files /dev/null and b/static/images/rc/rdi/rdi-define-connectivity.png differ diff --git a/static/images/rc/rdi/rdi-define-pipeline-cidr.png b/static/images/rc/rdi/rdi-define-pipeline-cidr.png new file mode 100644 index 000000000..d40ca25af Binary files /dev/null and b/static/images/rc/rdi/rdi-define-pipeline-cidr.png differ diff --git a/static/images/rc/rdi/rdi-define-source-database.png b/static/images/rc/rdi/rdi-define-source-database.png new file mode 100644 index 000000000..ab1b9e445 Binary files /dev/null and b/static/images/rc/rdi/rdi-define-source-database.png differ diff --git a/static/images/rc/rdi/rdi-delete-schema.png b/static/images/rc/rdi/rdi-delete-schema.png new file mode 100644 index 000000000..003cfe3b5 Binary files /dev/null and b/static/images/rc/rdi/rdi-delete-schema.png differ diff --git a/static/images/rc/rdi/rdi-edit-button.png b/static/images/rc/rdi/rdi-edit-button.png new file mode 100644 index 000000000..28cbec8c8 Binary files /dev/null and b/static/images/rc/rdi/rdi-edit-button.png differ diff --git a/static/images/rc/rdi/rdi-missing-unique-constraint.png b/static/images/rc/rdi/rdi-missing-unique-constraint.png new file mode 100644 index 000000000..b1eb80b73 Binary files /dev/null and b/static/images/rc/rdi/rdi-missing-unique-constraint.png differ diff --git a/static/images/rc/rdi/rdi-pipeline-setup-in-progress.png b/static/images/rc/rdi/rdi-pipeline-setup-in-progress.png new file mode 100644 index 000000000..d63d5b126 Binary files /dev/null and b/static/images/rc/rdi/rdi-pipeline-setup-in-progress.png differ diff --git a/static/images/rc/rdi/rdi-select-columns.png b/static/images/rc/rdi/rdi-select-columns.png new file mode 100644 index 000000000..23d042055 Binary files /dev/null and b/static/images/rc/rdi/rdi-select-columns.png differ diff --git a/static/images/rc/rdi/rdi-select-constraints.png b/static/images/rc/rdi/rdi-select-constraints.png new file mode 100644 index 000000000..2ed7d04eb Binary files /dev/null and b/static/images/rc/rdi/rdi-select-constraints.png differ diff --git a/static/images/rc/rdi/rdi-select-source-data.png b/static/images/rc/rdi/rdi-select-source-data.png new file mode 100644 index 000000000..d19e5046d Binary files /dev/null and b/static/images/rc/rdi/rdi-select-source-data.png differ diff --git a/static/images/rc/rdi/rdi-select-source-db.png b/static/images/rc/rdi/rdi-select-source-db.png new file mode 100644 index 000000000..db039556c Binary files /dev/null and b/static/images/rc/rdi/rdi-select-source-db.png differ diff --git a/static/images/rc/rdi/rdi-setup-connectivity-arn.png b/static/images/rc/rdi/rdi-setup-connectivity-arn.png new file mode 100644 index 000000000..3d3e74650 Binary files /dev/null and b/static/images/rc/rdi/rdi-setup-connectivity-arn.png differ diff --git a/static/images/rc/rdi/rdi-setup-diagram-aurora.png 
b/static/images/rc/rdi/rdi-setup-diagram-aurora.png new file mode 100644 index 000000000..212e2f3a6 Binary files /dev/null and b/static/images/rc/rdi/rdi-setup-diagram-aurora.png differ diff --git a/static/images/rc/rdi/rdi-setup-diagram-ec2.png b/static/images/rc/rdi/rdi-setup-diagram-ec2.png new file mode 100644 index 000000000..c5cee65fe Binary files /dev/null and b/static/images/rc/rdi/rdi-setup-diagram-ec2.png differ diff --git a/static/images/rc/rdi/rdi-source-dataset-size.png b/static/images/rc/rdi/rdi-source-dataset-size.png new file mode 100644 index 000000000..35b78e2ec Binary files /dev/null and b/static/images/rc/rdi/rdi-source-dataset-size.png differ diff --git a/static/images/rc/rdi/rdi-start-ingest.png b/static/images/rc/rdi/rdi-start-ingest.png new file mode 100644 index 000000000..b23878a72 Binary files /dev/null and b/static/images/rc/rdi/rdi-start-ingest.png differ diff --git a/static/images/rc/rdi/rdi-start-pipeline-setup.png b/static/images/rc/rdi/rdi-start-pipeline-setup.png new file mode 100644 index 000000000..2152f92da Binary files /dev/null and b/static/images/rc/rdi/rdi-start-pipeline-setup.png differ diff --git a/static/images/rc/rdi/rdi-status-metrics-tables.png b/static/images/rc/rdi/rdi-status-metrics-tables.png new file mode 100644 index 000000000..8c699fb18 Binary files /dev/null and b/static/images/rc/rdi/rdi-status-metrics-tables.png differ diff --git a/static/images/rc/rdi/rdi-update-preferences.png b/static/images/rc/rdi/rdi-update-preferences.png new file mode 100644 index 000000000..7774ca2ba Binary files /dev/null and b/static/images/rc/rdi/rdi-update-preferences.png differ