diff --git a/CHANGELOG.md b/CHANGELOG.md
index 662aff5..97ebad7 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,9 +5,22 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [1.4.0] - 2023-03-29
+
+### Changed
+
+- Python library updates
+- Upgraded Python runtime to 3.9
+- Using the `performAutoML` field when creating solutions now logs an error, but the solution build proceeds. This field is deprecated by the service.
+
+### Added
+
+- GitHub [issue #16](https://github.com/aws-solutions/maintaining-personalized-experiences-with-machine-learning/issues/16): `tags` are now supported for all component types (for example, dataset groups, import jobs, and solutions). Root-level tags are also supported in the config.
+- "UPDATE" model training is supported for input solutions trained with the User-Personalization recipe or the HRNN-Coldstart recipe.
+
## [1.3.1] - 2022-12-19
-### Fixed
+### Fixed
- GitHub [issue #19](https://github.com/aws-solutions/maintaining-personalized-experiences-with-machine-learning/issues/19). This fix prevents AWS Service Catalog AppRegistry Application Name and Attribute Group Name from using a string that begins with `AWS`, since strings beginning with `AWS` are considered reserved words by the AWS service.
@@ -15,7 +28,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Locked `boto3` to version `1.25.5`, and upgraded python library packages.
-
## [1.3.0] - 2022-11-17
### Added
diff --git a/README.md b/README.md
index 7685bca..154d485 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,7 @@
# Maintaining Personalized Experiences with Machine Learning
-The Maintaining Personalized Experiences with Machine Learning solution provides a mechanism to automate much of the
-workflow around Amazon Personalize. This includes dataset group creation, dataset creation and import, solution
+
+The Maintaining Personalized Experiences with Machine Learning solution provides a mechanism to automate much of the
+workflow around Amazon Personalize. This includes dataset group creation, dataset creation and import, solution
creation, solution version creation, campaign creation, and batch inference job creation.
Scheduled rules can be configured for setting up import jobs, solution version retraining (with campaign update) and
@@ -14,11 +15,11 @@ batch inference job creation.
- [Creating a custom build](#creating-a-custom-build)
- [Collection of operational metrics](#collection-of-operational-metrics)
-## Architecture
+## Architecture
The following describes the architecture of the solution:
-![architecture](source/images/solution-architecture.jpg)
+![architecture](source/images/solution-architecture.png)
The AWS CloudFormation template deploys the resources required to automate your Amazon Personalize usage and deployments.
The template includes the following components:
@@ -26,35 +27,34 @@ The template includes the following components:
1. An Amazon S3 bucket used to store personalization data and configuration files.
2. An AWS Lambda function triggered when a new or updated personalization configuration is uploaded to the personalization data bucket.
3. An AWS Step Functions workflow to manage all of the resources of an Amazon Personalize dataset group (including datasets, schemas, event tracker, filters, solutions, campaigns, and batch inference jobs).
-4. CloudWatch metrics for Amazon Personalize for each new trained solution version are added to help you evaluate the performance of a model over time.
-5. An Amazon Simple Notification Service (SNS) topic and subscription to notify an administrator when the maintenance workflow has completed via email.
-6. DynamoDB is used to track the scheduled events configured for Amazon Personalize to fully or partially retrain solutions, (re) import datasets and perform batch inference jobs.
+4. Amazon CloudWatch metrics for Amazon Personalize are added for each newly trained solution version to help you evaluate the performance of a model over time.
+5. An Amazon Simple Notification Service (SNS) topic and subscription to notify an administrator via email when the maintenance workflow has completed.
+6. Amazon DynamoDB is used to track the scheduled events configured for Amazon Personalize to fully or partially retrain solutions, (re)import datasets, and perform batch inference jobs.
7. An AWS Step Functions workflow is used to track the currently running scheduled events, and invokes Step Functions workflows to perform solution maintenance (creating new solution versions, updating campaigns), import updated datasets, and perform batch inference.
8. A set of maintenance AWS Step Functions workflows is provided to:
- 1. Create new dataset import jobs on schedule
- 2. Perform solution FULL retraining on schedule (and update associated campaigns)
- 3. Perform solution UPDATE retraining on schedule (and update associated campaigns)
- 4. Create batch inference jobs
+ 1. Create new dataset import jobs on schedule
+ 2. Perform solution FULL retraining on schedule (and update associated campaigns)
+ 3. Perform solution UPDATE retraining on schedule (and update associated campaigns)
+ 4. Create batch inference jobs
9. An Amazon EventBridge event bus, where resource status notification updates are posted throughout the AWS Step
-functions workflow
+ Functions workflow
10. A command line interface (CLI) lets existing resources be imported and allows schedules to be established for
-resources that already exist in Amazon Personalize
-
+ resources that already exist in Amazon Personalize
-**Note**: From v1.0.0, AWS CloudFormation template resources are created by the [AWS CDK](https://aws.amazon.com/cdk/)
-and [AWS Solutions Constructs](https://aws.amazon.com/solutions/constructs/).
+> **Note**: From v1.0.0, AWS CloudFormation template resources are created by the [AWS CDK](https://aws.amazon.com/cdk/)
+> and [AWS Solutions Constructs](https://aws.amazon.com/solutions/constructs/).
-### AWS CDK Constructs
+### AWS CDK Constructs
[AWS CDK Solutions Constructs](https://aws.amazon.com/solutions/constructs/) make it easier to consistently create
-well-architected applications. All AWS Solutions Constructs are reviewed by AWS and use best practices established by
-the AWS Well-Architected Framework. This solution uses the following AWS CDK Solutions Constructs:
+well-architected applications. All AWS Solutions Constructs are reviewed by AWS and use best practices established by
+the AWS Well-Architected Framework. This solution uses the following AWS CDK Solutions Constructs:
- [aws-lambda-sns](https://docs.aws.amazon.com/solutions/latest/constructs/aws-lambda-sns.html)
## Deployment
-You can launch this solution with one click from [AWS Solutions Implementations](https://aws.amazon.com/solutions/implementations/maintaining-personalized-experiences-with-ml).
+You can launch this solution with one click from [AWS Solutions Implementations](https://aws.amazon.com/solutions/implementations/maintaining-personalized-experiences-with-ml).
To customize the solution, or to contribute to the solution, see [Creating a custom build](#creating-a-custom-build).
@@ -63,7 +63,8 @@ To customize the solution, or to contribute to the solution, see [Creating a cus
This solution uses **parameter files**. The parameter file contains all the necessary information to create and maintain
your resources in Amazon Personalize.
-The file can contain the following sections
+The file can contain the following sections:
+
- `datasetGroup`
- `datasets`
- `solutions` (can contain `campaigns` and `batchInferenceJobs`)
@@ -73,168 +74,168 @@ The file can contain the following sections
See a sample of the parameter file:
-```json
+```json
{
- "datasetGroup": {
- "serviceConfig": {
- "name": "dataset-group-name"
- },
- "workflowConfig": {
- "schedules": {
- "import": "cron(0 */6 * * ? *)"
- }
- }
- },
- "datasets": {
- "users": {
- "dataset": {
- "serviceConfig": {
- "name": "users-data"
- }
- },
- "schema": {
- "serviceConfig": {
- "name": "users-schema",
- "schema": {
- "type": "record",
- "name": "users",
- "namespace": "com.amazonaws.personalize.schema",
- "fields": [
- {
- "name": "USER_ID",
- "type": "string"
- },
- {
- "name": "AGE",
- "type": "int"
- },
- {
- "name": "GENDER",
- "type": "string",
- "categorical": true
- }
- ]
- }
- }
- }
- },
- "interactions": {
- "dataset": {
- "serviceConfig": {
- "name": "interactions-data"
- }
- },
- "schema": {
- "serviceConfig": {
- "name": "interactions-schema",
- "schema": {
- "type": "record",
- "name": "interactions",
- "namespace": "com.amazonaws.personalize.schema",
- "fields": [
- {
- "name": "ITEM_ID",
- "type": "string"
- },
- {
- "name": "USER_ID",
- "type": "string"
- },
- {
- "name": "TIMESTAMP",
- "type": "long"
- },
- {
- "name": "EVENT_TYPE",
- "type": "string"
- },
- {
- "name": "EVENT_VALUE",
- "type": "float"
- }
- ]
- }
- }
- }
- }
- },
- "solutions": [
- {
- "serviceConfig": {
- "name": "sims-solution",
- "recipeArn": "arn:aws:personalize:::recipe/aws-sims"
- },
- "workflowConfig": {
- "schedules": {
- "full": "cron(0 0 ? * 1 *)"
- }
- }
- },
- {
- "serviceConfig": {
- "name": "popularity-count-solution",
- "recipeArn": "arn:aws:personalize:::recipe/aws-popularity-count"
- },
- "workflowConfig": {
- "schedules": {
- "full": "cron(0 1 ? * 1 *)"
- }
- }
- },
- {
- "serviceConfig": {
- "name": "user-personalization-solution",
- "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization"
- },
- "workflowConfig": {
- "schedules": {
- "full": "cron(0 2 ? * 1 *)"
- }
- },
- "campaigns": [
- {
- "serviceConfig": {
- "name": "user-personalization-campaign",
- "minProvisionedTPS": 1
- }
- }
- ],
- "batchInferenceJobs": [
- {
- "serviceConfig": {},
- "workflowConfig": {
- "schedule": "cron(0 3 * * ? *)"
- }
- }
- ]
- }
- ],
- "eventTracker": {
- "serviceConfig": {
- "name": "dataset-group-name-event-tracker"
- }
- },
- "filters": [
- {
- "serviceConfig": {
- "name": "clicked-or-streamed",
- "filterExpression": "INCLUDE ItemID WHERE Interactions.EVENT_TYPE in (\"click\", \"stream\")"
- }
- },
- {
- "serviceConfig": {
- "name": "interacted",
- "filterExpression": "INCLUDE ItemID WHERE Interactions.EVENT_TYPE in (\"*\")"
- }
- }
- ]
+ "datasetGroup": {
+ "serviceConfig": {
+ "name": "dataset-group-name-1"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "import": "cron(0 */6 * * ? *)"
+ }
+ }
+ },
+ "datasets": {
+ "users": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "users-data"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "users-schema",
+ "schema": {
+ "type": "record",
+ "name": "users",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "AGE",
+ "type": "int"
+ },
+ {
+ "name": "GENDER",
+ "type": "string",
+ "categorical": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "interactions": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "interactions-data"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "interactions-schema",
+ "schema": {
+ "type": "record",
+ "name": "interactions",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "TIMESTAMP",
+ "type": "long"
+ },
+ {
+ "name": "EVENT_TYPE",
+ "type": "string"
+ },
+ {
+ "name": "EVENT_VALUE",
+ "type": "float"
+ }
+ ]
+ }
+ }
+ }
+ }
+ },
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "sims-solution",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-sims"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "full": "cron(0 0 ? * 1 *)"
+ }
+ }
+ },
+ {
+ "serviceConfig": {
+ "name": "popularity-count-solution",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-popularity-count"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "full": "cron(0 1 ? * 1 *)"
+ }
+ }
+ },
+ {
+ "serviceConfig": {
+ "name": "user-personalization-solution",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "full": "cron(0 2 ? * 1 *)"
+ }
+ },
+ "campaigns": [
+ {
+ "serviceConfig": {
+ "name": "user-personalization-campaign",
+ "minProvisionedTPS": 1
+ }
+ }
+ ],
+ "batchInferenceJobs": [
+ {
+ "serviceConfig": {},
+ "workflowConfig": {
+ "schedule": "cron(0 3 * * ? *)"
+ }
+ }
+ ]
+ }
+ ],
+ "eventTracker": {
+ "serviceConfig": {
+ "name": "dataset-group-name-event-tracker"
+ }
+ },
+ "filters": [
+ {
+ "serviceConfig": {
+ "name": "clicked-or-streamed",
+ "filterExpression": "INCLUDE ItemID WHERE Interactions.EVENT_TYPE in (\"click\", \"stream\")"
+ }
+ },
+ {
+ "serviceConfig": {
+ "name": "interacted",
+ "filterExpression": "INCLUDE ItemID WHERE Interactions.EVENT_TYPE in (\"*\")"
+ }
+ }
+ ]
}
```
-This solution allows you to manage multiple dataset groups through the use of multiple parameter files. All .json files
-discovered under the `train/` prefix will trigger the workflow however, the following structure is recommended:
+This solution allows you to manage multiple dataset groups through the use of multiple parameter files. All `.json` files
+discovered under the `train/` prefix will trigger the workflow; however, the following structure is recommended:
```
train/
@@ -244,7 +245,7 @@ train/
│ ├── interactions.csv
│ ├── items.csv (optional)
│ └── users.csv (optional)
-│
+│
└── / (option 2 - multiple csv files for data import)
├── config.json
├── interactions/
@@ -261,7 +262,7 @@ train/
└── .csv
```
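+
+For example, a parameter file can be uploaded with any S3 client to trigger the workflow. A minimal boto3 sketch (the bucket and key names are placeholders):
+
+```python
+import boto3
+
+s3 = boto3.client("s3")
+
+# uploading a parameter file under the train/ prefix triggers the workflow;
+# the bucket name stands in for the personalization bucket this solution deploys
+s3.upload_file(
+    Filename="config.json",
+    Bucket="my-personalization-bucket",
+    Key="train/my-dataset-group/config.json",
+)
+```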
-If batch inference jobs are required, [batch inference job configuration files](https://docs.aws.amazon.com/personalize/latest/dg/recommendations-batch.html#batch-data-upload)
+If batch inference jobs are required, [batch inference job configuration files](https://docs.aws.amazon.com/personalize/latest/dg/recommendations-batch.html#batch-data-upload)
must also be uploaded to the following location:
```
@@ -269,7 +270,7 @@ batch/
│
└── /
└── /
- └── job_config.json
+ └── job_config.json
```
Batch inference output will be produced at the following location:
@@ -281,74 +282,395 @@ batch/
└── /
└── /
├── _CHECK
- └── job_config.json.out
+ └── job_config.json.out
+```
+
+Note: Using `performAutoML` is not recommended, as the service has deprecated this feature. Please take the time to select the most appropriate recipe for your use case. If this parameter appears in a configuration, the solution logs an error and continues to build the Amazon Personalize solution without it. For more details, refer to the [FAQs](https://github.com/aws-samples/amazon-personalize-samples/blob/master/PersonalizeCheatSheet2.0.md) and the [Amazon Personalize Developer Guide](https://docs.aws.amazon.com/personalize/latest/dg/API_CreateSolution.html#personalize-CreateSolution-request-performAutoML).
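+
+In practice, the behavior amounts to the following (a simplified sketch of the check this solution performs during configuration validation; the helper name is illustrative):
+
+```python
+from aws_lambda_powertools import Logger
+
+logger = Logger()
+
+
+def strip_perform_automl(solution_config: dict) -> dict:
+    """Drop the deprecated field and log an error, as this solution does."""
+    if "performAutoML" in solution_config:
+        del solution_config["performAutoML"]
+        logger.error(
+            "performAutoML is not a valid configuration parameter - proceeding to "
+            "create the solution without this feature."
+        )
+    return solution_config
+```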
+
+## Configuration with Tags
+
+You can also optionally supply tags in your configurations:
+
+```json
+{
+ "datasetGroup": {
+ "serviceConfig": {
+ "name": "dataset-group-name-2",
+ "tags": [
+ {
+ "tagKey": "dataset-group-key",
+ "tagValue": "dataset-group-value"
+ }
+ ]
+ }
+ },
+ "datasets": {
+ "serviceConfig": {
+ "importMode": "FULL",
+ "tags": [
+ {
+ "tagKey": "datasets-key",
+ "tagValue": "datasets-value"
+ }
+ ]
+ },
+ "interactions": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "interactions-data",
+ "tags": [
+ {
+ "tagKey": "interactions-dataset-key",
+ "tagValue": "interactions-dataset-value"
+ }
+ ]
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "interactions-schema",
+ "schema": {
+ "type": "record",
+ "name": "Interactions",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "TIMESTAMP",
+ "type": "long"
+ },
+ {
+ "name": "EVENT_TYPE",
+ "type": "string"
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ },
+ "items": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "items-data",
+ "tags": [
+ {
+ "tagKey": "items-dataset-key",
+ "tagValue": "items-dataset-value"
+ }
+ ]
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "items-schema",
+ "schema": {
+ "type": "record",
+ "name": "Items",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "GENRES",
+ "type": "string",
+ "categorical": true
+ },
+ {
+ "name": "YEAR",
+ "type": "int"
+ },
+ {
+ "name": "CREATION_TIMESTAMP",
+ "type": "long"
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ },
+ "users": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "users-data",
+ "tags": [
+ {
+ "tagKey": "users-dataset-key",
+ "tagValue": "users-dataset-value"
+ }
+ ]
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "users-schema",
+ "schema": {
+ "type": "record",
+ "name": "Users",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "GENDER",
+ "type": "string",
+ "categorical": true
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ }
+ },
+ "eventTracker": {
+ "serviceConfig": {
+ "name": "event-tracker-name",
+ "tags": [
+ {
+ "tagKey": "event-tracker-key",
+ "tagValue": "event-tracker-value"
+ }
+ ]
+ }
+ },
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "solution-recommender-user-personalization",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization",
+ "performHPO": true,
+ "tags": [
+ {
+ "tagKey": "solution-key",
+ "tagValue": "solution-value"
+ }
+ ],
+ "solutionVersion": {
+ "name": "solutionV1",
+ "trainingMode": "FULL",
+ "tags": [
+ {
+ "tagKey": "solution-version-key",
+ "tagValue": "solution-version-value"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
```
-Note: It is not recommended to use `performAutoML` as this feature will be deprecated in the future. Please take the time to select the most appropriate recipe for the use-case and skip this feature. Refer [FAQs](https://github.com/aws-samples/amazon-personalize-samples/blob/master/PersonalizeCheatSheet2.0.md).
+Note: You cannot tag already-created resources through the configuration. Only the "FULL" `importMode` is currently supported for datasets.
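+
+Resources that already exist can still be tagged outside of this solution, directly with the Amazon Personalize `TagResource` API. A minimal boto3 sketch (the ARN and tag values are placeholders):
+
+```python
+import boto3
+
+personalize = boto3.client("personalize")
+
+# tag an already-created resource directly through the service API
+personalize.tag_resource(
+    resourceArn="arn:aws:personalize:us-east-1:111122223333:dataset-group/dataset-group-name-2",
+    tags=[{"tagKey": "project", "tagValue": "user-personalization"}],
+)
+```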
-## Creating a custom build
-To customize the solution, follow the steps below:
+Tags can also be supplied at the root level, in which case they apply to every component that does not specify its own tags. For example, in the `dataset-group-name-3` configuration below, the root tag (`tagKey` "project", `tagValue` "user-personalization") applies to the `datasetGroup`, the `interactions` dataset, the `eventTracker`, and the `solutionVersion`. The dataset import jobs (for the users, interactions, and items datasets) instead receive the explicit "datasets-key"/"datasets-value" tag, and the solution `solution-user-personalization` receives "solution-key"/"solution-value":
+
+```json
+{
+ "tags": [
+ {
+ "tagKey": "project",
+ "tagValue": "user-personalization"
+ }
+ ],
+ "datasetGroup": {
+ "serviceConfig": {
+ "name": "dataset-group-name-3"
+ }
+ },
+ "datasets": {
+ "serviceConfig": {
+ "importMode": "FULL",
+ "tags": [
+ {
+ "tagKey": "datasets-key",
+ "tagValue": "datasets-value"
+ }
+ ]
+ },
+ "interactions": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "interactions-data"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "interactions-schema",
+ "schema": {
+ "type": "record",
+ "name": "Interactions",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "TIMESTAMP",
+ "type": "long"
+ },
+ {
+ "name": "EVENT_TYPE",
+ "type": "string"
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ }
+ },
+ "eventTracker": {
+ "serviceConfig": {
+ "name": "event-tracker-name"
+ }
+ },
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "solution-user-personalization",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization",
+ "performHPO": true,
+ "tags": [
+ {
+ "tagKey": "solution-key",
+ "tagValue": "solution-value"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+Some key points:
+
+1. Solution version tags can be specified inside the solution's `serviceConfig` field, inside the `solutionVersion` field (see the `dataset-group-name-2` example). `solutionVersion` specification is optional.
+2. Root-level tags apply to all components that do not have explicit tags specified (see the `dataset-group-name-3` example above and the sketch below).
+3. [`tags`](https://docs.aws.amazon.com/personalize/latest/dg/tagging-resources.html) are optional fields.
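+
+The root-tag fallback behaves roughly like the following (an illustrative sketch of the default-filling logic; the function name is hypothetical):
+
+```python
+def resolve_tags(resource_config: dict, root_tags: list) -> list:
+    """Explicit tags on a component win; otherwise root-level tags (or []) apply."""
+    return resource_config.get("tags", root_tags or [])
+
+
+root = [{"tagKey": "project", "tagValue": "user-personalization"}]
+assert resolve_tags({"name": "interactions-data"}, root) == root
+assert resolve_tags({"name": "sims-solution", "tags": []}, root) == []
+```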
+
+## Training Mode
+
+Training mode can be specified as "FULL" or "UPDATE" through the `solutionVersion` field inside the `solution` specification.
+
+The purpose of trainingMode="UPDATE" is to process new items added to the items dataset (via PutItems or a bulk upload), as well as impression data for new interactions added to the interactions dataset since the last FULL/UPDATE training. The UPDATE mode only brings in new items and impression data and does not retrain the model. Therefore, if there are no dataset updates since the last FULL/UPDATE training, you might get an error saying "There should be updates to at least one dataset after last active solution version with training mode set to FULL".
+
+With User-Personalization, Amazon Personalize automatically updates the latest model (solution version) every two hours behind the scenes to include new data. There is no cost for automatic updates. The solution version must be deployed with an [Amazon Personalize campaign](https://docs.aws.amazon.com/personalize/latest/dg/campaigns.html) for updates to occur. Your campaign automatically uses the updated solution version. No new solution version is created when an auto update completes and no new model metrics are generated. This is because no full retraining occurs. If you create a new solution version, Amazon Personalize will not automatically update older solution versions, even if you have deployed them in a campaign. Updates also do not occur if you have deleted your dataset.
+
+If every two hours is not frequent enough, you can manually create a solution version with trainingMode set to UPDATE to include those new items in recommendations. Amazon Personalize automatically updates only your latest fully trained solution version, so the manually updated solution version won't be automatically updated in the future. Also note that if you create a solution version with UPDATE, you will be charged for the server hours to perform the update.
+
+For more information about automatic updates, see the [Amazon Personalize Developer Guide](https://docs.aws.amazon.com/personalize/latest/dg/native-recipe-new-item-USER_PERSONALIZATION.html#automatic-updates).
+
+```json
+...
+"solutions": [
+ {
+ "serviceConfig": {
+ "name": "affinity_item",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity",
+ "solutionVersion": {
+ "trainingMode": "UPDATE"
+ "tags": [{"tagKey": "project", "tagValue": "item-affinity"}]
+ }
+ },
+ ...
+ }
+]
+...
+```
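+
+Outside of this solution's schedules, the equivalent call can be made directly against the service. A minimal boto3 sketch (the solution ARN is a placeholder):
+
+```python
+import boto3
+
+personalize = boto3.client("personalize")
+
+# create an incremental (UPDATE) solution version for a User-Personalization solution
+response = personalize.create_solution_version(
+    solutionArn="arn:aws:personalize:us-east-1:111122223333:solution/user-personalization-solution",
+    trainingMode="UPDATE",
+    tags=[{"tagKey": "project", "tagValue": "user-personalization"}],
+)
+print(response["solutionVersionArn"])
+```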
+
+## Creating a custom build
+
+To customize the solution, follow the steps below:
### Prerequisites
+
The following procedure assumes that all OS-level configuration has been completed. The prerequisites are:
-* [AWS Command Line Interface](https://aws.amazon.com/cli/)
-* [Python](https://www.python.org/) 3.9 or newer
-* [Node.js](https://nodejs.org/en/) 16.x or newer
-* [AWS CDK](https://aws.amazon.com/cdk/) 2.44.0 or newer
-* [Amazon Corretto OpenJDK](https://docs.aws.amazon.com/corretto/) 17.0.4.1
+- [AWS Command Line Interface](https://aws.amazon.com/cli/)
+- [Python](https://www.python.org/) 3.9 or newer
+- [Node.js](https://nodejs.org/en/) 16.x or newer
+- [AWS CDK](https://aws.amazon.com/cdk/) 2.44.0 or newer
+- [Amazon Corretto OpenJDK](https://docs.aws.amazon.com/corretto/) 17.0.4.1
> **Please ensure you test the templates before updating any production deployments.**
### 1. Download or clone this repo
+
```
git clone https://github.com/aws-solutions/maintaining-personalized-experiences-with-machine-learning
```
-### 2. Create a Python virtual environment for development
-```bash
-python -m virtualenv .venv
-source ./.venv/bin/activate
-cd ./source
-pip install -r requirements-dev.txt
+### 2. Create a Python virtual environment for development
+
+```bash
+python -m virtualenv .venv
+source ./.venv/bin/activate
+cd ./source
+pip install -r requirements-dev.txt
```
### 3. After introducing changes, run the unit tests to make sure the customizations don't break existing functionality
+
```bash
-pytest --cov
+pytest --cov
```
### 4. Build the solution for deployment
-#### Using AWS CDK (recommended)
+#### Using AWS CDK (recommended)
+
Packaging and deploying the solution with the AWS CDK allows for the most flexibility in development.
-```bash
-cd ./source/infrastructure
+
+```bash
+cd ./source/infrastructure
# set environment variables required by the solution
export BUCKET_NAME="my-bucket-name"
-# bootstrap CDK (required once - deploys a CDK bootstrap CloudFormation stack for assets)
+# bootstrap CDK (required once - deploys a CDK bootstrap CloudFormation stack for assets)
cdk bootstrap --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess
-# build the solution
+# build the solution
cdk synth
-# build and deploy the solution
+# build and deploy the solution
cdk deploy
```
-#### Using the solution build tools
+#### Using the solution build tools
+
It is highly recommended to use the AWS CDK to deploy this solution (using the instructions above). While CDK is used to
develop the solution, to package the solution for release as a CloudFormation template, use the `build-s3-cdk-dist`
-build tool:
+build tool:
```bash
cd ./deployment
-export DIST_BUCKET_PREFIX=my-bucket-name
-export SOLUTION_NAME=my-solution-name
-export VERSION=my-version
+export DIST_BUCKET_PREFIX=my-bucket-name
+export SOLUTION_NAME=my-solution-name
+export VERSION=my-version
export REGION_NAME=my-region
build-s3-cdk-dist deploy \
@@ -362,30 +684,32 @@ build-s3-cdk-dist deploy \
```
**Parameter Details**
-- `$DIST_BUCKET_PREFIX` - The S3 bucket name prefix. A randomized value is recommended. You will need to create an
+
+- `$DIST_BUCKET_PREFIX` - The S3 bucket name prefix. A randomized value is recommended. You will need to create an
S3 bucket where the name is `<DIST_BUCKET_PREFIX>-<REGION_NAME>`. The solution's CloudFormation template will expect the
source code to be located in the bucket matching that name.
- `$SOLUTION_NAME` - The name of this solution (example: personalize-solution-customization)
- `$VERSION` - The version number to use (example: v0.0.1)
- `$REGION_NAME` - The region name to use (example: us-east-1)
-This will result in all global assets being pushed to the `DIST_BUCKET_PREFIX`, and all regional assets being pushed to
+This will result in all global assets being pushed to the `DIST_BUCKET_PREFIX`, and all regional assets being pushed to
`DIST_BUCKET_PREFIX-<REGION_NAME>`. If your `REGION_NAME` is us-east-1, and the `DIST_BUCKET_PREFIX` is
-`my-bucket-name`, ensure that both `my-bucket-name` and `my-bucket-name-us-east-1` exist and are owned by you.
+`my-bucket-name`, ensure that both `my-bucket-name` and `my-bucket-name-us-east-1` exist and are owned by you.
After running the command, you can deploy the template:
-* Get the link of the `SOLUTION_NAME.template` uploaded to your Amazon S3 bucket
-* Deploy the solution to your account by launching a new AWS CloudFormation stack using the link of the template above.
+- Get the link of the `SOLUTION_NAME.template` uploaded to your Amazon S3 bucket
+- Deploy the solution to your account by launching a new AWS CloudFormation stack using the link of the template above.
> **Note:** `build-s3-cdk-dist` will use your current configured `AWS_REGION` and `AWS_PROFILE`. To set your defaults,
> install the [AWS Command Line Interface](https://aws.amazon.com/cli/) and run `aws configure`.
## Collection of operational metrics
+
This solution collects anonymous operational metrics to help AWS improve the quality of features of the solution.
For more information, including how to disable this capability, please see the [implementation guide](https://docs.aws.amazon.com/solutions/latest/maintaining-personalized-experiences-with-ml/collection-of-operational-metrics.html).
-
-***
+
+---
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
@@ -399,4 +723,4 @@ Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
-limitations under the License.
\ No newline at end of file
+limitations under the License.
diff --git a/source/aws_lambda/create_batch_inference_job/handler.py b/source/aws_lambda/create_batch_inference_job/handler.py
index 08cca79..b16eec7 100644
--- a/source/aws_lambda/create_batch_inference_job/handler.py
+++ b/source/aws_lambda/create_batch_inference_job/handler.py
@@ -62,6 +62,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_batch_segment_job/handler.py b/source/aws_lambda/create_batch_segment_job/handler.py
index e055405..f2c30df 100644
--- a/source/aws_lambda/create_batch_segment_job/handler.py
+++ b/source/aws_lambda/create_batch_segment_job/handler.py
@@ -57,6 +57,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_campaign/handler.py b/source/aws_lambda/create_campaign/handler.py
index 06109ab..3a5becd 100644
--- a/source/aws_lambda/create_campaign/handler.py
+++ b/source/aws_lambda/create_campaign/handler.py
@@ -51,6 +51,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_dataset/handler.py b/source/aws_lambda/create_dataset/handler.py
index 7c74d04..1f0f0b6 100644
--- a/source/aws_lambda/create_dataset/handler.py
+++ b/source/aws_lambda/create_dataset/handler.py
@@ -39,6 +39,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_dataset_group/handler.py b/source/aws_lambda/create_dataset_group/handler.py
index 20fcc21..a76fafc 100644
--- a/source/aws_lambda/create_dataset_group/handler.py
+++ b/source/aws_lambda/create_dataset_group/handler.py
@@ -46,6 +46,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
tracer = Tracer()
diff --git a/source/aws_lambda/create_dataset_import_job/handler.py b/source/aws_lambda/create_dataset_import_job/handler.py
index fc08b24..fa08fd1 100644
--- a/source/aws_lambda/create_dataset_import_job/handler.py
+++ b/source/aws_lambda/create_dataset_import_job/handler.py
@@ -46,6 +46,17 @@
"default": "omit",
"as": "iso8601",
},
+ "importMode": {"source": "event", "path": "serviceConfig.importMode", "default": "omit"},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
+ "publishAttributionMetricsToS3": {
+ "source": "event",
+ "path": "serviceConfig.publishAttributionMetricsToS3",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_event_tracker/handler.py b/source/aws_lambda/create_event_tracker/handler.py
index 9a87406..76f23e3 100644
--- a/source/aws_lambda/create_event_tracker/handler.py
+++ b/source/aws_lambda/create_event_tracker/handler.py
@@ -35,6 +35,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_filter/handler.py b/source/aws_lambda/create_filter/handler.py
index 473d098..9a56a49 100644
--- a/source/aws_lambda/create_filter/handler.py
+++ b/source/aws_lambda/create_filter/handler.py
@@ -39,6 +39,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_recommender/handler.py b/source/aws_lambda/create_recommender/handler.py
index 3e02880..b61127f 100644
--- a/source/aws_lambda/create_recommender/handler.py
+++ b/source/aws_lambda/create_recommender/handler.py
@@ -41,6 +41,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
diff --git a/source/aws_lambda/create_solution/handler.py b/source/aws_lambda/create_solution/handler.py
index a458227..d9816bd 100644
--- a/source/aws_lambda/create_solution/handler.py
+++ b/source/aws_lambda/create_solution/handler.py
@@ -30,11 +30,6 @@
"path": "serviceConfig.performHPO",
"default": "omit",
},
- "performAutoML": {
- "source": "event",
- "path": "serviceConfig.performAutoML",
- "default": "omit",
- },
"recipeArn": {
"source": "event",
"path": "serviceConfig.recipeArn",
@@ -60,6 +55,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
tracer = Tracer()
diff --git a/source/aws_lambda/create_solution_version/handler.py b/source/aws_lambda/create_solution_version/handler.py
index 594c5c8..3388389 100644
--- a/source/aws_lambda/create_solution_version/handler.py
+++ b/source/aws_lambda/create_solution_version/handler.py
@@ -47,6 +47,11 @@
"default": "omit",
"as": "iso8601",
},
+ "tags": {
+ "source": "event",
+ "path": "serviceConfig.tags",
+ "default": "omit",
+ },
}
logger = Logger()
tracer = Tracer()
diff --git a/source/aws_lambda/shared/personalize/service_model.py b/source/aws_lambda/shared/personalize/service_model.py
index 93bbf02..66cd0b4 100644
--- a/source/aws_lambda/shared/personalize/service_model.py
+++ b/source/aws_lambda/shared/personalize/service_model.py
@@ -133,7 +133,7 @@ def _filter(self, result: Dict) -> Dict:
result.pop("accountId", None)
result.pop("trackingId", None)
- # datset
+ # dataset
result.pop("datasetType", None)
# schema
diff --git a/source/aws_lambda/shared/personalize_service.py b/source/aws_lambda/shared/personalize_service.py
index a76e14d..bd96b9f 100644
--- a/source/aws_lambda/shared/personalize_service.py
+++ b/source/aws_lambda/shared/personalize_service.py
@@ -12,45 +12,45 @@
# ######################################################################################################################
import json
import re
+import time
from datetime import datetime
from pathlib import Path
-from typing import Callable, Dict, Optional, List, Union
+from typing import Callable, Dict, List, Optional, Union
import avro.schema
import botocore.exceptions
import jmespath
from aws_lambda_powertools import Logger, Metrics
from aws_lambda_powertools.metrics import MetricUnit, SchemaValidationError
-from botocore.stub import Stubber
-from dateutil.tz import tzlocal
-
from aws_solutions.core import (
- get_service_client,
+ get_aws_account,
get_aws_partition,
get_aws_region,
- get_aws_account,
+ get_service_client,
)
-from aws_solutions.scheduler.common import ScheduleError, Schedule
+from aws_solutions.scheduler.common import Schedule, ScheduleError
+from botocore.stub import Stubber
+from dateutil.tz import tzlocal
from shared.events import Notifies
from shared.exceptions import (
- ResourcePending,
- ResourceNeedsUpdate,
ResourceFailed,
+ ResourceNeedsUpdate,
+ ResourcePending,
SolutionVersionPending,
)
from shared.resource import (
- Resource,
+ BatchInferenceJob,
+ BatchSegmentJob,
+ Campaign,
Dataset,
- EventTracker,
DatasetGroup,
DatasetImportJob,
+ EventTracker,
+ Filter,
+ Resource,
+ Schema,
Solution,
SolutionVersion,
- BatchInferenceJob,
- BatchSegmentJob,
- Schema,
- Filter,
- Campaign,
)
from shared.s3 import S3
@@ -122,6 +122,12 @@ def describe(self, resource: Resource, **kwargs):
else:
return self.describe_default(resource, **kwargs)
+ def list_tags_for_resource(self, **kwargs):
+ logger.debug(f"listing tags for {kwargs}")
+ return self.cli.list_tags_for_resource(**kwargs)
+
def describe_default(self, resource: Resource, **kwargs):
"""
Describe a resource in Amazon Personalize by deriving its ARN from its name
@@ -164,6 +170,10 @@ def describe_with_update(self, resource: Resource, **kwargs):
kwargs = self._remove_workflow_parameters(resource, kwargs.copy())
result = self.describe_default(resource, **kwargs)
for k, v in kwargs.items():
+ # tags are not returned in any describe call
+ if k == "tags":
+ continue
+
received = result[resource.name.camel][k]
expected = v
@@ -487,11 +497,12 @@ class InputValidator:
@classmethod
def validate(cls, method: str, expected_params: Dict) -> None:
"""
- Validate an Amazon Personalize resource using the botocore stubber
+ Validate Amazon Personalize resource configuration parameters using the botocore stubber
:return: None. Raises ParamValidationError if the InputValidator fails to validate
"""
cli = get_service_client("personalize")
func = getattr(cli, method)
+
with Stubber(cli) as stubber:
stubber.add_response(method, {}, expected_params)
func(**expected_params)
@@ -499,6 +510,7 @@ def validate(cls, method: str, expected_params: Dict) -> None:
class Configuration:
_schema = [
+ {"tags": []},
{
"datasetGroup": [
"serviceConfig",
@@ -512,6 +524,7 @@ class Configuration:
},
{
"datasets": [
+ "serviceConfig",
{
"users": [
{"dataset": ["serviceConfig"]},
@@ -580,35 +593,66 @@ def __init__(self):
self._configuration_errors = []
self.config_dict = {}
self.dataset_group = "UNKNOWN"
+ self.pass_root_tags = False
- def load(self, content: Union[Path, str]):
- if isinstance(content, Path):
- config_str = content.read_text(encoding="utf-8")
+ def load(self, content: Union[Path, str, dict]):
+ if isinstance(content, dict):
+ self.config_dict = content
else:
- config_str = content
+ if isinstance(content, Path):
+ config_str = content.read_text(encoding="utf-8")
+ else:
+ config_str = content
+ self.config_dict = self._decode(config_str)
- self.config_dict = self._decode(config_str)
+ self.pass_root_tags = jmespath.search("tags", self.config_dict)
def validate(self):
self._validate_not_empty()
self._validate_keys()
+ self._validate_root_tags()
+ self._validate_tags(
+ "datasetGroup.serviceConfig.tags",
+ "datasets.serviceConfig.tags",
+ "datasets.interactions.dataset.serviceConfig.tags",
+ "datasets.users.dataset.serviceConfig.tags",
+ "datasets.items.dataset.serviceConfig.tags",
+ "filters[].serviceConfig.tags",
+ "eventTracker.serviceConfig.tags",
+ "solutions[].serviceConfig.tags",
+ "solutions[].serviceConfig.solutionVersion.tags",
+ "solutions[].campaigns[].serviceConfig.tags",
+ "recommenders[].serviceConfig.tags",
+ "solutions[].batchInferenceJobs[].serviceConfig.tags",
+ "solutions[].batchSegmentJobs[].serviceConfig.tags",
+ )
self._validate_dataset_group()
self._validate_schemas()
self._validate_datasets()
+ self._validate_dataset_import_job()
self._validate_event_tracker()
self._validate_filters()
self._validate_solutions()
self._validate_solution_update()
+ self._validate_recommender()
self._validate_cron_expressions(
"datasetGroup.workflowConfig.schedules.import",
"solutions[].workflowConfig.schedules.full",
"solutions[].workflowConfig.schedules.update",
"solutions[].batchInferenceJobs[].workflowConfig.schedule",
)
+
self._validate_naming()
return len(self._configuration_errors) == 0
+ def config_dict_wdefaults(self):
+ self._validate_not_empty()
+ self._validate_dataset_import_job()
+ self._validate_solutions()
+ self._validate_solution_update()
+ return self.config_dict
+
@property
def errors(self) -> List[str]:
return self._configuration_errors
@@ -630,6 +674,7 @@ def _validate_resource(self, resource: Resource, expected_params):
try:
InputValidator.validate(f"create_{resource.name.snake}", expected_params)
+
except botocore.exceptions.ParamValidationError as exc:
self._configuration_errors.append(str(exc).replace("\n", " "))
@@ -641,6 +686,7 @@ def _validate_dataset_group(self, path="datasetGroup.serviceConfig"):
self._validate_resource(DatasetGroup(), dataset_group)
if isinstance(dataset_group, dict):
self.dataset_group = dataset_group.get("name", self.dataset_group)
+ self._fill_default_vals("datasetGroup", dataset_group)
def _validate_event_tracker(self, path="eventTracker.serviceConfig"):
event_tracker = jmespath.search(path, self.config_dict)
@@ -653,7 +699,9 @@ def _validate_event_tracker(self, path="eventTracker.serviceConfig"):
return
event_tracker["datasetGroupArn"] = DatasetGroup().arn("validation")
+
self._validate_resource(EventTracker(), event_tracker)
+ self._fill_default_vals("eventTracker", event_tracker)
def _validate_filters(self, path="filters[].serviceConfig"):
filters = jmespath.search(path, self.config_dict) or {}
@@ -663,6 +711,7 @@ def _validate_filters(self, path="filters[].serviceConfig"):
_filter["datasetGroupArn"] = DatasetGroup().arn("validation")
self._validate_resource(Filter(), _filter)
+ self._fill_default_vals("filter", _filter)
def _validate_type(self, var, typ, err: str):
validates = isinstance(var, typ)
@@ -702,11 +751,59 @@ def _validate_solutions(self, path="solutions[]"):
)
_solution = _solution.get("serviceConfig")
+
if not self._validate_type(_solution, dict, f"solutions[{idx}].serviceConfig must be an object"):
continue
+ # `performAutoML` is currently accepted by InputValidator.validate() as a valid field.
+ # Once the botocore stub for the `create_solution` call no longer includes this parameter,
+ # this check can be deleted.
+ if "performAutoML" in _solution:
+ del _solution["performAutoML"]
+ logger.error(
+ "performAutoML is not a valid configuration parameter - proceeding to create the "
+ "solution without this feature. For more details, refer to the Maintaining Personalized Experiences "
+ "Github project's README.md file."
+ )
+
_solution["datasetGroupArn"] = DatasetGroup().arn("validation")
- self._validate_resource(Solution(), _solution)
+ if "solutionVersion" in _solution:
+ # To pass solution through InputValidator
+ solution_version_config = _solution["solutionVersion"]
+ del _solution["solutionVersion"]
+ self._validate_resource(Solution(), _solution)
+ _solution["solutionVersion"] = solution_version_config
+
+ else:
+ self._validate_resource(Solution(), _solution)
+
+ self._fill_default_vals("solution", _solution)
+ self._validate_solution_version(_solution)
+
+ def _validate_solution_version(self, solution_config):
+ allowed_sol_version_keys = ["trainingMode", "tags"]
+
+ if "solutionVersion" not in solution_config:
+ solution_config["solutionVersion"] = {}
+ else:
+ keys_not_allowed = set(solution_config["solutionVersion"].keys()) - set(allowed_sol_version_keys)
+ if keys_not_allowed:
+ self._configuration_errors.append(
+ f"Allowed keys for solutionVersion are: {allowed_sol_version_keys}. Unsupported key(s): {list(keys_not_allowed)}"
+ )
+
+ self._fill_default_vals("solutionVersion", solution_config["solutionVersion"])
+
+ def _validate_recommender(self, path="recommenders[]"):
+ recommenders = jmespath.search(path, self.config_dict) or {}
+ for idx, recommender_config in enumerate(recommenders):
+ if not self._validate_type(
+ recommender_config, dict, f"recommenders[{idx}].serviceConfig must be an object"
+ ):
+ continue
+
+ _recommender = recommender_config.get("serviceConfig")
+ self._fill_default_vals("recommender", _recommender)
def _validate_solution_update(self):
invalid = (
@@ -721,21 +818,6 @@ def _validate_solution_update(self):
f"solution {solution_name} does not support solution version incremental updates - please use `full` instead of `update`."
)
- def _validate_solution_versions(self, path: str, solution_versions: List[Dict]):
- for idx, solution_version_config in enumerate(solution_versions):
- current_path = f"{path}.solutionVersions[{idx}]"
-
- solution_version = solution_version_config.get("solutionVersion")
- if not self._validate_type(
- solution_version,
- dict,
- f"{current_path}.solutionVersion must be an object",
- ):
- continue
- else:
- solution_version["solutionArn"] = Solution().arn("validation")
- self._validate_resource(SolutionVersion(), solution_version)
-
def _validate_campaigns(self, path, campaigns: List[Dict]):
for idx, campaign_config in enumerate(campaigns):
current_path = f"{path}.campaigns[{idx}]"
@@ -747,6 +829,8 @@ def _validate_campaigns(self, path, campaigns: List[Dict]):
campaign["solutionVersionArn"] = SolutionVersion().arn("validation")
self._validate_resource(Campaign(), campaign)
+ self._fill_default_vals("campaign", campaign)
+
def _validate_batch_inference_jobs(self, path, solution_name, batch_inference_jobs: List[Dict]):
for idx, batch_job_config in enumerate(batch_inference_jobs):
current_path = f"{path}.batchInferenceJobs[{idx}]"
@@ -773,6 +857,7 @@ def _validate_batch_inference_jobs(self, path, solution_name, batch_inference_jo
}
)
self._validate_resource(BatchInferenceJob(), batch_job)
+ self._fill_default_vals("batchJob", batch_job)
def _validate_batch_segment_jobs(self, path, solution_name, batch_segment_jobs: List[Dict]):
for idx, batch_job_config in enumerate(batch_segment_jobs):
@@ -800,6 +885,7 @@ def _validate_batch_segment_jobs(self, path, solution_name, batch_segment_jobs:
}
)
self._validate_resource(BatchSegmentJob(), batch_job)
+ self._fill_default_vals("segmentJob", batch_job)
def _validate_rate(self, expression):
rate_re = re.compile(r"rate\((?P<value>\d+) (?P<unit>(minutes?|hours?|days?))\)")
@@ -867,7 +953,6 @@ def _validate_datasets(self) -> None:
return
# some values are provided by the solution - we introduce placeholders
- SolutionVersion().arn("validation")
dataset.update(
{
"datasetGroupArn": DatasetGroup().arn("validation"),
@@ -876,6 +961,20 @@ def _validate_datasets(self) -> None:
}
)
self._validate_resource(Dataset(), dataset)
+ self._fill_default_vals("dataset", dataset)
+
+ def _validate_dataset_import_job(self, path="datasets.serviceConfig") -> None:
+ """
+ Perform a validation of the dataset import fields to ensure default values are present
+ :return: None
+ """
+ dataset_import = jmespath.search(path, self.config_dict)
+ if "datasets" in self.config_dict:
+ if not dataset_import:
+ self.config_dict["datasets"]["serviceConfig"] = {}
+ dataset_import = jmespath.search(path, self.config_dict)
+
+ self._fill_default_vals("datasetImport", dataset_import)
def _validate_schemas(self) -> None:
"""
@@ -938,6 +1037,43 @@ def _validate_keys(self, config: Dict = None, schema: List = None, path=""):
else:
self._configuration_errors.append(f"an unknown validation error occurred at {path}")
+ def _validate_tag_types(self, result, path):
+ err_msg = f"Invalid type at path {path} for tags, expected list[dict]."
+ is_lst = self._validate_type(result, list, err_msg)
+ if isinstance(result, list) and result and isinstance(result[0], list):  # sometimes jmespath returns a list of lists instead
+ result = result[0]
+
+ if is_lst:
+ for tag_instance in result:
+ is_dict = self._validate_type(tag_instance, dict, err_msg)
+ if path == "root":
+ if is_dict and set(tag_instance.keys()) == {"tagKey", "tagValue"}:
+ continue
+ else:
+ self._configuration_errors.append(
+ "Parameter validation failed: Tag keys must be one of: 'tagKey', 'tagValue'"
+ )
+ return False
+ else:
+ continue  # non-root paths: type-check every tag entry
+ return is_lst
+
+ def _validate_root_tags(self):
+ if "tags" in self.config_dict:
+ self._validate_tag_types(self.config_dict["tags"], "root")
+
+ def _validate_tags(self, *paths: str):
+ """
+ Validate the configuration in config_dict for all tags provided.
+ Ensures that the tags supplied are a list of dict, and only contain the allowed key values.
+ :param paths: The paths in config_dict (used in recursion to identify a jmespath path) that may contain tags
+ :return: None
+ """
+ for path in paths:
+ result = jmespath.search(path, self.config_dict)
+ if result:
+ self._validate_tag_types(result, path)
+
def _validate_list(self, config: List, schema: List, path=""):
for idx, item in enumerate(config):
current_path = f"{path}[{idx}]"
@@ -971,3 +1107,43 @@ def _validate_naming(self):
"""Validate that names of resources don't overlap in ways that might cause issues"""
self._validate_no_duplicates(name="campaign names", path="solutions[].campaigns[].serviceConfig.name")
self._validate_no_duplicates(name="solution names", path="solutions[].serviceConfig.name")
+
+ def _fill_default_vals(self, resource_type, resource_dict):
+ """Insert default values for tags and other fields whenever not supplied"""
+
+ if (
+ resource_type
+ in [
+ "datasetGroup",
+ "datasetImport",
+ "dataset",
+ "eventTracker",
+ "solution",
+ "solutionVersion",
+ "filter",
+ "recommender",
+ "campaign",
+ "batchJob",
+ "segmentJob",
+ ]
+ and "tags" not in resource_dict
+ ):
+ if self.pass_root_tags:
+ resource_dict["tags"] = self.config_dict["tags"]
+ else:
+ resource_dict["tags"] = []
+
+ if resource_type == "datasetImport":
+ if "importMode" not in resource_dict:
+ resource_dict["importMode"] = "FULL"
+ if "publishAttributionMetricsToS3" not in resource_dict:
+ resource_dict["publishAttributionMetricsToS3"] = False
+
+ if resource_type == "solutionVersion":
+ if "tags" not in resource_dict:
+ if self.pass_root_tags:
+ resource_dict["tags"] = self.config_dict["tags"]
+ else:
+ resource_dict["tags"] = []
+ if "trainingMode" not in resource_dict:
+ resource_dict["trainingMode"] = "FULL"
diff --git a/source/aws_lambda/shared/s3.py b/source/aws_lambda/shared/s3.py
index bbbc1b1..44ece2a 100644
--- a/source/aws_lambda/shared/s3.py
+++ b/source/aws_lambda/shared/s3.py
@@ -60,6 +60,7 @@ def _exists_one(self):
return True
def _exists_any(self):
+ latest = None
try:
bucket = self.cli.Bucket(self.bucket)
objects = [
diff --git a/source/aws_lambda/shared/sfn_middleware.py b/source/aws_lambda/shared/sfn_middleware.py
index 95c7955..245430a 100644
--- a/source/aws_lambda/shared/sfn_middleware.py
+++ b/source/aws_lambda/shared/sfn_middleware.py
@@ -19,22 +19,21 @@
from dataclasses import dataclass, field
from enum import Enum, auto
from pathlib import Path
-from typing import Dict, Any, Callable, Optional, List, Union
+from typing import Any, Callable, Dict, List, Optional, Union
from uuid import uuid4
import jmespath
from aws_lambda_powertools import Logger
-from dateutil.parser import isoparse
-
from aws_solutions.core import get_service_client
+from dateutil.parser import isoparse
from shared.date_helpers import parse_datetime
from shared.exceptions import (
- ResourcePending,
- ResourceInvalid,
ResourceFailed,
+ ResourceInvalid,
ResourceNeedsUpdate,
+ ResourcePending,
)
-from shared.personalize_service import Personalize
+from shared.personalize_service import Configuration, Personalize
from shared.resource import get_resource
logger = Logger()
@@ -48,10 +47,7 @@
STATUS_FAILED = "CREATE FAILED"
STATUS_ACTIVE = "ACTIVE"
-WORKFLOW_PARAMETERS = {
- "maxAge",
- "timeStarted",
-}
+WORKFLOW_PARAMETERS = {"maxAge", "timeStarted"}
WORKFLOW_CONFIG_DEFAULT = {"timeStarted": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")}
@@ -86,13 +82,15 @@ def set_workflow_config(config: Dict) -> Dict:
"batchSegmentJobs": Arity.MANY,
"filters": Arity.MANY,
"solutionVersion": Arity.ONE,
+ "tags": Arity.MANY,
}
# Note: schema creation notification is not supported at this time
# Note: dataset, dataset import job, event tracker notifications are added in the workflow
- for k, v in config.items():
- if k in {"serviceConfig", "workflowConfig", "bucket", "currentDate"}:
- pass # do not modify any serviceConfig keys
+ for k in config:
+ v = config[k]
+ if k in {"serviceConfig", "workflowConfig", "bucket", "currentDate", "tags"}:
+ pass # do not modify any serviceConfig keys/tags
elif k in resources.keys() and resources[k] == Arity.ONE:
config[k].setdefault("workflowConfig", {})
config[k]["workflowConfig"] |= WORKFLOW_CONFIG_DEFAULT
@@ -104,6 +102,10 @@ def set_workflow_config(config: Dict) -> Dict:
else:
config[k] = set_workflow_config(config[k]) if config[k] else config[k]
+ cfg = Configuration()
+ cfg.load(config)
+ config = cfg.config_dict_wdefaults()
+
return config
@@ -264,11 +266,16 @@ def check_status(self, resource: Dict[str, Any], **expected) -> Dict: # NOSONAR
actual_value = actual_value.lower()
expected_value = expected_value.lower()
+ if expected_key == "tags":
+ continue
+
# some parameters don't require checking:
if self.resource == "datasetImportJob" and expected_key in {
"jobName",
"dataSource",
"roleArn",
+ "importMode",
+ "publishAttributionMetricsToS3",
}:
continue
if self.resource.startswith("batch") and expected_key in {
@@ -278,8 +285,16 @@ def check_status(self, resource: Dict[str, Any], **expected) -> Dict: # NOSONAR
"roleArn",
}:
continue
- if self.resource == "solutionVersion" and expected_key == "trainingMode":
- continue
+
+ if self.resource == "solutionVersion":
+ if expected_key == "trainingMode":
+ continue
+ if expected_key == "name":
+ if "/" in actual_value: # user provided name.
+ actual_value = actual_value.split("/")[-1]
+ if "solution_" in actual_value: # name was auto-generated as default value.
+ continue
+
if expected_key in WORKFLOW_PARAMETERS:
continue
diff --git a/source/cdk_solution_helper_py/README.md b/source/cdk_solution_helper_py/README.md
index 4c0762a..4f6abcd 100644
--- a/source/cdk_solution_helper_py/README.md
+++ b/source/cdk_solution_helper_py/README.md
@@ -1,45 +1,46 @@
# CDK Solution Helper for Python and CDK
+
## Infrastructure Deployment Tooling
-This tooling helps you develop new AWS Solutions using the AWS CDK with an approach that is compatible with the
-current AWS Solutions build pipeline.
-
-This README summarizes using the tool.
+This tooling helps you develop new AWS Solutions using the AWS CDK with an approach that is compatible with the
+current AWS Solutions build pipeline.
+
+This README summarizes using the tool.
## Prerequisites
Install this package. It requires at least:
-- Python 3.7
-- AWS CDK version 2.7.0 or higher
+- Python 3.9
+- AWS CDK version 2.44.0 or higher
-To install the packages:
+To install the packages:
```
pip install <path>/cdk_solution_helper_py/helpers_cdk # where <path> is the path to the solution helper
-pip install /cdk_solution_helper_py/helpers_common # where is the path to the solution helper
+pip install <path>/cdk_solution_helper_py/helpers_common # where <path> is the path to the solution helper
```
-
+
## 1. Create a new CDK application
```shell script
-mkdir -p your_solution_name/deployment
+mkdir -p your_solution_name/deployment
mkdir -p your_solution_name/source/infrastructure
cd your_solution_name/source/infrastructure
cdk init app --language=python .
```
-## 2. Install the package
+## 2. Install the package
```
cd your_solution_name
-virtualenv .venv
+virtualenv .venv
source ./.venv/bin/activate
pip install <path>/cdk_solution_helper_py/helpers_cdk # where <path> is the path to the solution helper
pip install <path>/cdk_solution_helper_py/helpers_common # where <path> is the path to the solution helper
```
-# 3. Write CDK code using the helpers
+## 3. Write CDK code using the helpers
This might be a file called `app.py` in your CDK application directory
@@ -77,7 +78,7 @@ logger = logging.getLogger("cdk-helper")
solution = CDKSolution(cdk_json_path=Path(__file__).parent.absolute() / "cdk.json")
-# Inherit from SolutionStack to create a CDK app compatible with AWS Solutions
+# Inherit from SolutionStack to create a CDK app compatible with AWS Solutions
class MyStack(SolutionStack):
def __init__(self, scope: Construct, construct_id: str, description: str, template_filename, **kwargs):
super().__init__(scope, construct_id, description, template_filename, **kwargs)
@@ -100,8 +101,8 @@ class MyStack(SolutionStack):
# add any custom metrics to your stack!
self.metrics.update({"your_custom_metric": "your_custom_metric_value"})
-
- # example of adding an AWS Lambda function for Python
+
+ # example of adding an AWS Lambda function for Python
SolutionsPythonFunction(
self,
"ExampleLambdaFunction",
@@ -138,15 +139,14 @@ if __name__ == "__main__":
result = build_app()
```
-
## 4. Build the solution for deployment
You can use the [AWS CDK](https://aws.amazon.com/cdk/) to deploy the solution directly
```shell script
-# install the Python dependencies
-cd
-virtualenv .venv
+# install the Python dependencies
+cd your_solution_name
+virtualenv .venv
source .venv/bin/activate
pip install -r source/requirements-build-and-test.txt
@@ -156,22 +156,22 @@ cd source/infrastructure
# set environment variables required by the solution - use your own bucket name here
export BUCKET_NAME="placeholder"
-# bootstrap CDK (required once - deploys a CDK bootstrap CloudFormation stack for assets)
+# bootstrap CDK (required once - deploys a CDK bootstrap CloudFormation stack for assets)
cdk bootstrap --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess
# deploy with CDK
cdk deploy
-#
+#
```
At this point, the stack will be built and deployed using CDK - the template will take on default CloudFormation
parameter values. To modify the stack parameters, you can use the `--parameters` flag in CDK deploy - for example:
```shell script
-cdk deploy --parameters [...]
+cdk deploy --parameters [...]
```
-## 5. Package the solution for release
+## 5. Package the solution for release
It is highly recommended to use CDK to deploy this solution (see step #1 above). While CDK is used to develop the
solution, to package the solution for release as a CloudFormation template use the `build-s3-cdk-dist` script:
@@ -183,55 +183,55 @@ export DIST_OUTPUT_BUCKET=my-bucket-name
export SOLUTION_NAME=my-solution-name
export VERSION=my-version
-build-s3-cdk-dist deploy --source-bucket-name $DIST_OUTPUT_BUCKET --solution-name $SOLUTION_NAME --version-code $VERSION --cdk-app-path ../source/infrastructure/app.py --cdk-app-entrypoint app:build_app --sync
+build-s3-cdk-dist deploy --source-bucket-name $DIST_OUTPUT_BUCKET --solution-name $SOLUTION_NAME --version-code $VERSION --cdk-app-path ../source/infrastructure/app.py --cdk-app-entrypoint app:build_app --sync
```
> **Note**: `build-s3-cdk-dist` will use your current configured `AWS_REGION` and `AWS_PROFILE`. To set your defaults
-install the [AWS Command Line Interface](https://aws.amazon.com/cli/) and run `aws configure`.
+> install the [AWS Command Line Interface](https://aws.amazon.com/cli/) and run `aws configure`.
#### Parameter Details:
-
+
- `$DIST_OUTPUT_BUCKET` - This is the global name of the distribution. For the bucket name, the AWS Region is added to
-the global name (example: 'my-bucket-name-us-east-1') to create a regional bucket. The lambda artifact should be
-uploaded to the regional buckets for the CloudFormation template to pick it up for deployment.
+ the global name (example: 'my-bucket-name-us-east-1') to create a regional bucket. The lambda artifact should be
+ uploaded to the regional buckets for the CloudFormation template to pick it up for deployment.
- `$SOLUTION_NAME` - The name of this solution (example: your-solution-name)
- `$VERSION` - The version number of the change
-> **Notes**: The `build_s3_cdk_dist` script expects the bucket name as one of its parameters, and this value should
-not include the region suffix. See below on how to create the buckets expected by this solution:
->
-> The `SOLUTION_NAME`, and `VERSION` variables might also be defined in the `cdk.json` file.
+> **Notes**: The `build_s3_cdk_dist` script expects the bucket name as one of its parameters, and this value should
+> not include the region suffix. See below on how to create the buckets expected by this solution:
+>
+> The `SOLUTION_NAME` and `VERSION` variables might also be defined in the `cdk.json` file.
## 6. Upload deployment assets to your Amazon S3 buckets
Create the CloudFormation bucket defined above, as well as a regional bucket in the region you wish to deploy. The
CloudFormation template is configured to pull the Lambda deployment packages from Amazon S3 bucket in the region the
template is being launched in. Create a bucket in the desired region with the region name appended to the name of the
-bucket. eg: for us-east-1 create a bucket named: ```my-bucket-us-east-1```.
+bucket (e.g., for us-east-1, create a bucket named `my-bucket-us-east-1`).
For example:
-```bash
+```bash
aws s3 mb s3://my-bucket-name --region us-east-1
aws s3 mb s3://my-bucket-name-us-east-1 --region us-east-1
```
-Copy the built S3 assets to your S3 buckets:
+Copy the built S3 assets to your S3 buckets:
```
use the --sync option of build-s3-cdk-dist to upload the global and regional assets
```
-> **Notes**: Choose your desired region by changing region in the above example from us-east-1 to your desired region
-of the S3 buckets.
+> **Note**: Choose the region for your S3 buckets by changing `us-east-1` in the example above to your desired region.
## 7. Launch the CloudFormation template
-* Get the link of `your-solution-name.template` uploaded to your Amazon S3 bucket.
-* Deploy the solution to your account by launching a new AWS CloudFormation stack using the link of the
-`your-solution-name.template`.
-
-***
+- Get the link of `your-solution-name.template` uploaded to your Amazon S3 bucket.
+- Deploy the solution to your account by launching a new AWS CloudFormation stack using the link of the
+ `your-solution-name.template`.
+
+---
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
@@ -245,4 +245,4 @@ Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
-limitations under the License.
\ No newline at end of file
+limitations under the License.
diff --git a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/java/bundling.py b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/java/bundling.py
index f613a09..242074a 100644
--- a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/java/bundling.py
+++ b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/java/bundling.py
@@ -90,7 +90,6 @@ def _invoke_local_command(
cwd: Union[str, Path, None] = None,
return_stdout: bool = False,
):
-
cwd = Path(cwd)
rv = ""
diff --git a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/layers/aws_lambda_powertools/requirements/requirements.txt b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/layers/aws_lambda_powertools/requirements/requirements.txt
index c3edbd4..1bb43d3 100644
--- a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/layers/aws_lambda_powertools/requirements/requirements.txt
+++ b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/layers/aws_lambda_powertools/requirements/requirements.txt
@@ -1,2 +1,2 @@
-aws-lambda-powertools==1.29.2
+aws-lambda-powertools==2.10.0
aws-xray-sdk==2.11.0
\ No newline at end of file
diff --git a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/bundling.py b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/bundling.py
index e3e9276..c86985c 100644
--- a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/bundling.py
+++ b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/bundling.py
@@ -25,7 +25,7 @@
from aws_solutions.cdk.helpers import copytree
-DEFAULT_RUNTIME = Runtime.PYTHON_3_7
+DEFAULT_RUNTIME = Runtime.PYTHON_3_9
BUNDLER_DEPENDENCIES_CACHE = "/var/dependencies"
REQUIREMENTS_TXT_FILE = "requirements.txt"
REQUIREMENTS_PIPENV_FILE = "Pipfile"
diff --git a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/function.py b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/function.py
index ed497fa..7c66898 100644
--- a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/function.py
+++ b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/aws_lambda/python/function.py
@@ -27,7 +27,7 @@
from aws_solutions.cdk.aws_lambda.python.bundling import SolutionsPythonBundling
from aws_solutions.cdk.aws_lambda.python.hash_utils import DirectoryHash
-DEFAULT_RUNTIME = Runtime.PYTHON_3_7
+DEFAULT_RUNTIME = Runtime.PYTHON_3_9
DEPENDENCY_EXCLUDES = ["*.pyc"]
@@ -62,7 +62,7 @@ def __init__(
if not kwargs.get("role"):
kwargs["role"] = self._create_role()
- # python 3.7 is selected to support custom resources and inline code
+ # python 3.9 is selected to support custom resources and inline code
if not kwargs.get("runtime"):
kwargs["runtime"] = DEFAULT_RUNTIME
diff --git a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/synthesizers.py b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/synthesizers.py
index 630419b..019cec7 100644
--- a/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/synthesizers.py
+++ b/source/cdk_solution_helper_py/helpers_cdk/aws_solutions/cdk/synthesizers.py
@@ -68,7 +68,7 @@ def delete_bootstrap_parameters(self):
def delete_cdk_helpers(self):
"""Remove the CDK bucket deployment helpers, since solutions don't have a bootstrap bucket."""
to_delete = []
- for (resource_name, resource) in self.contents.get("Resources", {}).items():
+ for resource_name, resource in self.contents.get("Resources", {}).items():
if "Custom::CDKBucketDeployment" in resource["Type"]:
to_delete.append(resource_name)
if "CDKBucketDeployment" in resource_name:
@@ -89,7 +89,7 @@ def patch_nested(self):
]
},
)
- for (resource_name, resource) in self.contents.get("Resources", {}).items():
+ for resource_name, resource in self.contents.get("Resources", {}).items():
resource_type = resource.get("Type")
if resource_type == "AWS::CloudFormation::Stack":
try:
@@ -120,7 +120,7 @@ def patch_nested(self):
def patch_lambda(self):
"""Patch the lambda functions for S3 deployment compatibility"""
- for (resource_name, resource) in self.contents.get("Resources", {}).items():
+ for resource_name, resource in self.contents.get("Resources", {}).items():
resource_type = resource.get("Type")
if resource_type == "AWS::Lambda::Function" or resource_type == "AWS::Lambda::LayerVersion":
logger.info(f"{resource_name} ({resource_type}) patching")
@@ -190,7 +190,7 @@ def patch_lambda(self):
def patch_app_reg(self):
"""Patch the App Registry Info"""
- for (resource_name, resource) in self.contents.get("Resources", {}).items():
+ for resource_name, resource in self.contents.get("Resources", {}).items():
resource_type = resource.get("Type")
if resource_type == "AWS::ApplicationInsights::Application":
logger.info(f"{resource_name} ({resource_type}) patching")
@@ -248,7 +248,7 @@ def save(self, asset_path_global: Path = None, asset_path_regional: Path = None)
str(asset_path.joinpath(self.global_asset_name)),
)
- # regional solutions assets - default folder location is "regional-s3-assets"
+ # the regional solutions assets - default folder location is "regional-s3-assets"
if asset_path_regional:
asset_path = self._build_asset_path(asset_path_regional)
for asset in self.assets_regional:
diff --git a/source/cdk_solution_helper_py/helpers_cdk/setup.py b/source/cdk_solution_helper_py/helpers_cdk/setup.py
index 04cc93f..8211189 100644
--- a/source/cdk_solution_helper_py/helpers_cdk/setup.py
+++ b/source/cdk_solution_helper_py/helpers_cdk/setup.py
@@ -52,7 +52,7 @@ def get_version():
"pip>=22.3.1",
"aws_cdk_lib==2.44.0",
"Click==8.1.3",
- "boto3==1.25.5",
+ "boto3==1.26.47",
"requests==2.28.1",
"crhelper==2.0.11",
],
diff --git a/source/cdk_solution_helper_py/helpers_common/setup.py b/source/cdk_solution_helper_py/helpers_common/setup.py
index 2266334..a89ffad 100644
--- a/source/cdk_solution_helper_py/helpers_common/setup.py
+++ b/source/cdk_solution_helper_py/helpers_common/setup.py
@@ -42,7 +42,7 @@ def get_version():
license="Apache License 2.0",
packages=setuptools.find_namespace_packages(exclude=["build*"]),
install_requires=[
- "boto3==1.25.5",
+ "boto3==1.26.47",
"pip>=22.3.1",
],
python_requires=">=3.9",
diff --git a/source/cdk_solution_helper_py/requirements-dev.txt b/source/cdk_solution_helper_py/requirements-dev.txt
index 10fdc74..38f633e 100644
--- a/source/cdk_solution_helper_py/requirements-dev.txt
+++ b/source/cdk_solution_helper_py/requirements-dev.txt
@@ -1,16 +1,16 @@
aws_cdk_lib==2.44.0
aws-cdk.aws-servicecatalogappregistry-alpha==2.44.0a0
black
-boto3==1.25.5
+boto3==1.26.47
requests==2.28.1
crhelper==2.0.11
Click
moto
pipenv
poetry
-pytest
+pytest>=7.2.0
pytest-cov>=4.0.0
-pytest-mock>=3.9.0
+pytest-mock>=3.10.0
tox
tox-pyenv
-e ./source/cdk_solution_helper_py/helpers_cdk
diff --git a/source/images/solution-architecture.png b/source/images/solution-architecture.png
new file mode 100644
index 0000000..ae0fe77
Binary files /dev/null and b/source/images/solution-architecture.png differ
diff --git a/source/infrastructure/cdk.json b/source/infrastructure/cdk.json
index 3ad87bd..ce6dfbc 100644
--- a/source/infrastructure/cdk.json
+++ b/source/infrastructure/cdk.json
@@ -3,8 +3,8 @@
"context": {
"SOLUTION_NAME": "Maintaining Personalized Experiences with Machine Learning",
"SOLUTION_ID": "SO0170",
- "SOLUTION_VERSION": "v1.3.1",
+ "SOLUTION_VERSION": "v1.4.0",
"APP_REGISTRY_NAME": "personalized-experiences-ML",
"APPLICATION_TYPE": "AWS-Solutions"
}
-}
+}
\ No newline at end of file
diff --git a/source/infrastructure/deploy.py b/source/infrastructure/deploy.py
index 5645787..6f1f4d4 100644
--- a/source/infrastructure/deploy.py
+++ b/source/infrastructure/deploy.py
@@ -17,10 +17,9 @@
from pathlib import Path
import aws_cdk as cdk
-
+from aspects.app_registry import AppRegistry
from aws_solutions.cdk import CDKSolution
from personalize.stack import PersonalizeStack
-from aspects.app_registry import AppRegistry
logger = logging.getLogger("cdk-helper")
solution = CDKSolution(cdk_json_path=Path(__file__).parent.absolute() / "cdk.json")
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_batch_inference_job.py b/source/infrastructure/personalize/aws_lambda/functions/create_batch_inference_job.py
index 09cfea6..ef876b2 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_batch_inference_job.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_batch_inference_job.py
@@ -84,6 +84,8 @@ def _set_permissions(self):
"personalize:DescribeBatchInferenceJob",
"personalize:DescribeSolution",
"personalize:DescribeSolutionVersion",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
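The same two tagging actions are added to every resource-creation function below. A minimal CDK sketch of the recurring pattern, assuming aws-cdk-lib v2 (the ARN pattern and variable name are illustrative):

```python
from aws_cdk import Aws
from aws_cdk import aws_iam as iam

# Each creation function's policy gains the two Personalize tagging actions
# alongside its existing create/list/describe actions.
tagging_statement = iam.PolicyStatement(
    actions=[
        "personalize:TagResource",
        "personalize:ListTagsForResource",
    ],
    effect=iam.Effect.ALLOW,
    resources=[
        f"arn:{Aws.PARTITION}:personalize:{Aws.REGION}:{Aws.ACCOUNT_ID}:batch-inference-job/*",
    ],
)
```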
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_batch_segment_job.py b/source/infrastructure/personalize/aws_lambda/functions/create_batch_segment_job.py
index 4cb4510..99b62fc 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_batch_segment_job.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_batch_segment_job.py
@@ -84,6 +84,8 @@ def _set_permissions(self):
"personalize:DescribeBatchSegmentJob",
"personalize:DescribeSolution",
"personalize:DescribeSolutionVersion",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_campaign.py b/source/infrastructure/personalize/aws_lambda/functions/create_campaign.py
index 67aba1c..7542aeb 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_campaign.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_campaign.py
@@ -44,6 +44,8 @@ def _set_permissions(self):
"personalize:ListCampaigns",
"personalize:DescribeCampaign",
"personalize:UpdateCampaign",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_dataset.py b/source/infrastructure/personalize/aws_lambda/functions/create_dataset.py
index 3013068..a45b9d5 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_dataset.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_dataset.py
@@ -47,6 +47,8 @@ def _set_permissions(self):
"personalize:CreateDataset",
"personalize:DescribeDataset",
"personalize:ListDatasets",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_dataset_group.py b/source/infrastructure/personalize/aws_lambda/functions/create_dataset_group.py
index af4483e..117349f 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_dataset_group.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_dataset_group.py
@@ -128,6 +128,8 @@ def _set_permissions(self):
actions=[
"personalize:DescribeDatasetGroup",
"personalize:CreateDatasetGroup",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[f"arn:{Aws.PARTITION}:personalize:{Aws.REGION}:{Aws.ACCOUNT_ID}:dataset-group/*"],
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_dataset_import_job.py b/source/infrastructure/personalize/aws_lambda/functions/create_dataset_import_job.py
index 9668670..c1803a9 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_dataset_import_job.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_dataset_import_job.py
@@ -92,6 +92,8 @@ def _set_permissions(self):
"personalize:CreateDatasetImportJob",
"personalize:DescribeDatasetImportJob",
"personalize:ListDatasetImportJobs",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_event_tracker.py b/source/infrastructure/personalize/aws_lambda/functions/create_event_tracker.py
index 48b5bea..5e87aa3 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_event_tracker.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_event_tracker.py
@@ -36,6 +36,7 @@ def __init__(
{
"serviceConfig": {
"name.$": "$.eventTracker.serviceConfig.name",
+ "tags.$": "$.eventTracker.serviceConfig.tags",
"datasetGroupArn.$": "$.datasetGroup.serviceConfig.datasetGroupArn",
},
"workflowConfig": {
@@ -57,6 +58,8 @@ def _set_permissions(self):
"personalize:DescribeEventTracker",
"personalize:ListEventTrackers",
"personalize:CreateEventTracker",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_filter.py b/source/infrastructure/personalize/aws_lambda/functions/create_filter.py
index d540fcc..cc436a9 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_filter.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_filter.py
@@ -45,6 +45,8 @@ def _set_permissions(self):
"personalize:DescribeDatasetGroup",
"personalize:CreateFilter",
"personalize:DescribeFilter",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_recommender.py b/source/infrastructure/personalize/aws_lambda/functions/create_recommender.py
index 1b8a93a..9449502 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_recommender.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_recommender.py
@@ -42,6 +42,8 @@ def _set_permissions(self):
"personalize:CreateRecommender",
"personalize:ListRecommenders",
"personalize:DescribeDatasetGroup",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_solution.py b/source/infrastructure/personalize/aws_lambda/functions/create_solution.py
index 8c8aed5..7b686c9 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_solution.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_solution.py
@@ -42,6 +42,8 @@ def _set_permissions(self):
"personalize:CreateSolution",
"personalize:ListSolutions",
"personalize:DescribeDatasetGroup",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/functions/create_solution_version.py b/source/infrastructure/personalize/aws_lambda/functions/create_solution_version.py
index 1220ab6..38e2a8b 100644
--- a/source/infrastructure/personalize/aws_lambda/functions/create_solution_version.py
+++ b/source/infrastructure/personalize/aws_lambda/functions/create_solution_version.py
@@ -43,6 +43,8 @@ def _set_permissions(self):
"personalize:ListSolutionVersions",
"personalize:DescribeSolution",
"personalize:GetSolutionMetrics",
+ "personalize:TagResource",
+ "personalize:ListTagsForResource",
],
effect=iam.Effect.ALLOW,
resources=[
diff --git a/source/infrastructure/personalize/aws_lambda/layers/aws_solutions/requirements/requirements.txt b/source/infrastructure/personalize/aws_lambda/layers/aws_solutions/requirements/requirements.txt
index 7b9477c..7506f91 100644
--- a/source/infrastructure/personalize/aws_lambda/layers/aws_solutions/requirements/requirements.txt
+++ b/source/infrastructure/personalize/aws_lambda/layers/aws_solutions/requirements/requirements.txt
@@ -4,4 +4,4 @@ avro==1.11.1
cronex==0.1.3.1
jmespath==1.0.1
parsedatetime==2.6
-boto3==1.25.5
+boto3==1.26.47
diff --git a/source/infrastructure/personalize/step_functions/batch_inference_jobs_fragment.py b/source/infrastructure/personalize/step_functions/batch_inference_jobs_fragment.py
index fdff3bf..e9b0ea0 100644
--- a/source/infrastructure/personalize/step_functions/batch_inference_jobs_fragment.py
+++ b/source/infrastructure/personalize/step_functions/batch_inference_jobs_fragment.py
@@ -150,7 +150,7 @@ def __init__(
items_path="$.solution.batchInferenceJobs",
parameters={
"solutionVersionArn.$": "$.solution.solutionVersion.serviceConfig.solutionVersionArn",
- "batchInferenceJob.$": "$$.Map.Item.Value",
+ "batchInferenceJob.$": "$$.Map.Item.Value", # NOSONAR (python:S1192) - string for clarity
"batchInferenceJobName.$": f"States.Format('batch_{{}}_{{}}', $.solution.serviceConfig.name, {CURRENT_DATE_PATH})",
"bucket.$": BUCKET_PATH, # NOSONAR (python:S1192) - string for clarity
"currentDate.$": CURRENT_DATE_PATH, # NOSONAR (python:S1192) - string for clarity
diff --git a/source/infrastructure/personalize/step_functions/batch_segment_jobs_fragment.py b/source/infrastructure/personalize/step_functions/batch_segment_jobs_fragment.py
index f604c8a..15e24e7 100644
--- a/source/infrastructure/personalize/step_functions/batch_segment_jobs_fragment.py
+++ b/source/infrastructure/personalize/step_functions/batch_segment_jobs_fragment.py
@@ -148,7 +148,7 @@ def __init__(
items_path="$.solution.batchSegmentJobs",
parameters={
"solutionVersionArn.$": "$.solution.solutionVersion.serviceConfig.solutionVersionArn",
- "batchSegmentJob.$": "$$.Map.Item.Value",
+ "batchSegmentJob.$": "$$.Map.Item.Value", # NOSONAR (python:S1192) - string for clarity
"batchSegmentJobName.$": f"States.Format('batch_{{}}_{{}}', $.solution.serviceConfig.name, {CURRENT_DATE_PATH})",
"bucket.$": BUCKET_PATH, # NOSONAR (python:S1192) - string for clarity
"currentDate.$": CURRENT_DATE_PATH, # NOSONAR (python:S1192) - string for clarity
diff --git a/source/infrastructure/personalize/step_functions/dataset_import_fragment.py b/source/infrastructure/personalize/step_functions/dataset_import_fragment.py
index 3605e56..f0dd2e3 100644
--- a/source/infrastructure/personalize/step_functions/dataset_import_fragment.py
+++ b/source/infrastructure/personalize/step_functions/dataset_import_fragment.py
@@ -14,21 +14,20 @@
from aws_cdk import Duration
from aws_cdk.aws_stepfunctions import (
- StateMachineFragment,
- State,
- TaskInput,
- INextable,
Choice,
Condition,
+ INextable,
JsonPath,
Pass,
+ State,
+ StateMachineFragment,
+ TaskInput,
)
from constructs import Construct
-
from personalize.aws_lambda.functions import (
CreateDataset,
- CreateSchema,
CreateDatasetImportJob,
+ CreateSchema,
)
@@ -61,6 +60,13 @@ def __init__(
"jobName.$": f"States.Format('dataset_import_{id.lower()}_{{}}', $.currentDate)",
"datasetArn.$": f"$.datasets.{id.lower()}.dataset.serviceConfig.datasetArn",
}
+
+ service_input = {
+ "importMode": "FULL",
+ "publishAttributionMetricsToS3.$": "$.datasets.serviceConfig.publishAttributionMetricsToS3",
+ "tags.$": "$.datasets.serviceConfig.tags"
+ }
+
import_datasets_from_csv = create_dataset_import_job.state(self, f"Try {id} Dataset Import from CSV",
payload=TaskInput.from_object({
"serviceConfig": {
@@ -68,6 +74,7 @@ def __init__(
"dataSource": {
"dataLocation.$": f"States.Format('s3://{{}}/{{}}/{id.lower()}.csv', $.bucket.name, $.bucket.key)" # NOSONAR (python:S1192) - string for clarity
},
+ **service_input,
},
"workflowConfig": {
"maxAge.$": "$.datasetGroup.workflowConfig.maxAge", # NOSONAR (python:S1192) - string for clarity
@@ -86,6 +93,7 @@ def __init__(
"dataSource": {
"dataLocation.$": f"States.Format('s3://{{}}/{{}}/{id.lower()}', $.bucket.name, $.bucket.key)" # NOSONAR (python:S1192) - string for clarity
},
+ **service_input
},
"workflowConfig": {
"maxAge.$": "$.datasetGroup.workflowConfig.maxAge", # NOSONAR (python:S1192) - string for clarity
@@ -109,6 +117,7 @@ def __init__(
"schemaArn.$": f"$.datasets.{id.lower()}.schema.serviceConfig.schemaArn",
"datasetGroupArn.$": "$.datasetGroup.serviceConfig.datasetGroupArn",
"datasetType": f"{id.lower()}",
+ "tags.$": f"$.datasets.{id.lower()}.dataset.serviceConfig.tags",
},
"workflowConfig": {
"maxAge.$": "$.datasetGroup.workflowConfig.maxAge",
@@ -117,6 +126,7 @@ def __init__(
}),
result_path=f"$.datasets.{id.lower()}.dataset.serviceConfig",
**retry_config))
+
.next(import_datasets_from_prefix))
self._choice.otherwise(
na_state
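The new `service_input` mapping is spliced into each import job's `serviceConfig` with dictionary unpacking. A standalone illustration of the merge semantics (keys abbreviated):

```python
# "**service_input" merges the shared import settings into each job's
# serviceConfig; on a key collision, the entry appearing later wins.
service_input = {
    "importMode": "FULL",
    "tags.$": "$.datasets.serviceConfig.tags",
}
service_config = {
    "jobName.$": "$.jobName",
    **service_input,
}
assert service_config == {
    "jobName.$": "$.jobName",
    "importMode": "FULL",
    "tags.$": "$.datasets.serviceConfig.tags",
}
```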
diff --git a/source/infrastructure/personalize/step_functions/filter_fragment.py b/source/infrastructure/personalize/step_functions/filter_fragment.py
index 52b8c67..b9b7644 100644
--- a/source/infrastructure/personalize/step_functions/filter_fragment.py
+++ b/source/infrastructure/personalize/step_functions/filter_fragment.py
@@ -65,7 +65,7 @@ def __init__(
items_path="$.filters",
parameters={
"datasetGroupArn.$": "$.datasetGroup.serviceConfig.datasetGroupArn",
- "filter.$": "$$.Map.Item.Value",
+ "filter.$": "$$.Map.Item.Value", # NOSONAR (python:S1192) - string for clarity
},
result_path=JsonPath.DISCARD,
)
diff --git a/source/infrastructure/personalize/step_functions/solution_fragment.py b/source/infrastructure/personalize/step_functions/solution_fragment.py
index d130c4c..d260a83 100644
--- a/source/infrastructure/personalize/step_functions/solution_fragment.py
+++ b/source/infrastructure/personalize/step_functions/solution_fragment.py
@@ -10,31 +10,32 @@
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
+import time
+from datetime import datetime
from typing import List, Optional
from aws_cdk import Duration
from aws_cdk.aws_stepfunctions import (
- StateMachineFragment,
- State,
- INextable,
Choice,
- Pass,
- Map,
Condition,
+ INextable,
JsonPath,
+ Map,
Parallel,
+ Pass,
+ State,
StateMachine,
+ StateMachineFragment,
)
-from constructs import Construct
-
from aws_solutions.scheduler.cdk.construct import Scheduler
+from constructs import Construct
from personalize.aws_lambda.functions import (
- CreateSolution,
- CreateSolutionVersion,
- CreateCampaign,
CreateBatchInferenceJob,
CreateBatchSegmentJob,
+ CreateCampaign,
CreateRecommender,
+ CreateSolution,
+ CreateSolutionVersion,
)
from personalize.step_functions.batch_inference_jobs_fragment import (
BatchInferenceJobsFragment,
@@ -88,6 +89,7 @@ def __init__(
input_path="$.datasetGroupArn", # NOSONAR (python:S1192) - string for clarity
result_path="$.solution.serviceConfig.datasetGroupArn",
)
+
_prepare_recommender_input = Pass(
self,
"Prepare Recommender Input Data",
@@ -108,7 +110,8 @@ def __init__(
parameters={
"serviceConfig": {
"solutionArn.$": "$.solution.serviceConfig.solutionArn", # NOSONAR (python:S1192) - string for clarity
- "trainingMode": "FULL"
+ "trainingMode": "FULL",
+ "tags.$": "$.solution.serviceConfig.solutionVersion.tags" # NOSONAR (python:S1192) - string for clarity
},
"workflowConfig": {
"maxAge": "365 days", # do not create a new solution version on new file upload
@@ -176,8 +179,9 @@ def __init__(
"Set Solution Version ID",
parameters={
"serviceConfig": {
- "trainingMode.$": "$.solution.solutionVersion.serviceConfig.trainingMode",
+ "trainingMode.$": "$.solution.serviceConfig.solutionVersion.trainingMode",
"solutionArn.$": "$.solution.solutionVersion.serviceConfig.solutionArn", # NOSONAR (python:S1192) - string for clarity
+ "tags.$": "$.solution.solutionVersion.serviceConfig.tags" # NOSONAR (python:S1192) - string for clarity
},
"workflowConfig": {
"maxAge.$": "$.solution.solutionVersion.workflowConfig.maxAge",
@@ -224,7 +228,7 @@ def __init__(
items_path="$.solution.campaigns", # NOSONAR (python:S1192) - string for clarity
parameters={
"solutionVersionArn.$": "$.solution.solutionVersion.serviceConfig.solutionVersionArn",
- "campaign.$": "$$.Map.Item.Value",
+ "campaign.$": "$$.Map.Item.Value", # NOSONAR (python:S1192) - string for clarity
}
).iterator(_prepare_campaign_input
.next(_create_campaign))
@@ -260,8 +264,9 @@ def __init__(
"serviceConfig.$": "$.solution.serviceConfig",
"solutionVersion": {
"serviceConfig": {
- "trainingMode": "FULL",
+ "trainingMode.$": "$.solution.serviceConfig.solutionVersion.trainingMode",
"solutionArn.$": "$.solution.solutionVersion.serviceConfig.solutionArn", # NOSONAR (python:S1192) - string for clarity
+ "tags.$": "$.solution.solutionVersion.serviceConfig.tags" # NOSONAR (python:S1192) - string for clarity
},
"workflowConfig": {
"maxAge": MINIMUM_TIME
@@ -295,6 +300,7 @@ def __init__(
"serviceConfig": {
"trainingMode": "UPDATE",
"solutionArn.$": "$.solution.solutionVersion.serviceConfig.solutionArn", # NOSONAR (python:S1192) - string for clarity
+ "tags.$": "$.solution.solutionVersion.serviceConfig.tags" # NOSONAR (python:S1192) - string for clarity
},
"workflowConfig": {
"maxAge": MINIMUM_TIME,
@@ -327,7 +333,7 @@ def __init__(
parameters={
"datasetGroupArn.$": "$.datasetGroup.serviceConfig.datasetGroupArn",
"datasetGroupName.$": "$.datasetGroup.serviceConfig.name",
- "recommender.$": "$$.Map.Item.Value",
+ "recommender.$": "$$.Map.Item.Value", # NOSONAR (python:S1192) - string for clarity
"bucket.$": BUCKET_PATH,
"currentDate.$": CURRENT_DATE_PATH, # NOSONAR (python:S1192) - string for clarity
}
@@ -342,7 +348,7 @@ def __init__(
parameters={
"datasetGroupArn.$": "$.datasetGroup.serviceConfig.datasetGroupArn",
"datasetGroupName.$": "$.datasetGroup.serviceConfig.name",
- "solution.$": "$$.Map.Item.Value",
+ "solution.$": "$$.Map.Item.Value", # NOSONAR (python:S1192) - string for clarity
"bucket.$": BUCKET_PATH,
"currentDate.$": CURRENT_DATE_PATH, # NOSONAR (python:S1192) - string for clarity
}
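These hunks replace the hard-coded `"trainingMode": "FULL"` with a JSONPath read from the solution version config (plus `tags`), which is what allows an UPDATE training mode to flow through the state machine. At the service level the distinction lands in `CreateSolutionVersion`; a hedged boto3 sketch with placeholder values:

```python
import boto3

personalize = boto3.client("personalize")

# trainingMode="UPDATE" incrementally refreshes an existing model with new data,
# while "FULL" retrains from scratch; tags are applied to the new solution version.
response = personalize.create_solution_version(
    solutionArn="arn:aws:personalize:us-east-1:111122223333:solution/my-solution",
    trainingMode="UPDATE",
    tags=[{"tagKey": "team", "tagValue": "personalization"}],
)
```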
diff --git a/source/requirements-dev.txt b/source/requirements-dev.txt
index f05b067..4724eb8 100644
--- a/source/requirements-dev.txt
+++ b/source/requirements-dev.txt
@@ -1,6 +1,6 @@
avro==1.11.1
black
-boto3==1.25.5
+boto3==1.26.47
aws_cdk_lib==2.44.0
aws_solutions_constructs.aws_lambda_sns==2.25.0
aws-cdk.aws-servicecatalogappregistry-alpha==2.44.0a0
@@ -9,10 +9,10 @@ crhelper==2.0.11
cronex==0.1.3.1
moto==2.3.0
parsedatetime==2.6
-pytest
-pytest-cov>=2.11.1
-pytest-env>=0.6.2
-pytest-mock>=3.5.1
+pytest>=7.2.0
+pytest-cov>=4.0.0
+pytest-env>=0.8.1
+pytest-mock>=3.10.0
pyyaml==5.4.1
responses~=0.17.0
tenacity==8.0.1
diff --git a/source/scheduler/README.md b/source/scheduler/README.md
index 3c9f80c..5b42176 100644
--- a/source/scheduler/README.md
+++ b/source/scheduler/README.md
@@ -1,24 +1,25 @@
# AWS Solutions Step Functions Scheduler
+
## Scheduling for AWS Step Functions
-This tooling adds scheduling support for AWS Step Functions via a set of libraries and CDK packages.
+This tooling adds scheduling support for AWS Step Functions via a set of libraries and CDK packages.
-This README summarizes using the scheduler.
+This README summarizes how to use the scheduler.
## Prerequisites
Install this package. It requires at least:
-- Python 3.7
-- AWS CDK version 2.7.0 or higher
+- Python 3.9
+- AWS CDK version 2.44.0 or higher
-To install the packages:
+To install the packages:
```
pip install /scheduler/cdk # where is the path to the scheduler namespace package
-pip install /scheduler/common # where is the path to the scheduler namespace package
+pip install /scheduler/common # where is the path to the scheduler namespace package
```
-
+
## 1. Add the scheduler to your CDK application
```python
@@ -66,18 +67,18 @@ SchedulerFragment(
## 3. Check the status of schedules using the included CLI
This package also provides a CLI `aws-solutions-scheduler`. This CLI can be used to control the scheduler and establish
-schedules for the [Maintaining Personalized Experiences with Machine Learning](https://aws.amazon.com/solutions/implementations/maintaining-personalized-experiences-with-ml/)
-solution.
+schedules for the [Maintaining Personalized Experiences with Machine Learning](https://aws.amazon.com/solutions/implementations/maintaining-personalized-experiences-with-ml/)
+solution.
### Installation
It is recommended that you perform the following steps in a dedicated virtual environment:
```shell
-cd source
-pip install --upgrade pip
+cd source
+pip install --upgrade pip
pip install cdk_solution_helper_py/helpers_common
-pip install scheduler/common
+pip install scheduler/common
```
### Usage
@@ -104,7 +105,7 @@ Commands:
#### Create new schedule(s) for an Amazon Personalize dataset group
-Schedules for dataset import, solution version FULL and UPDATE retraining can be established using the CLI for dataset
+Schedules for dataset import and for FULL and UPDATE solution version retraining can be established using the CLI for dataset
groups in Amazon Personalize. This example creates a weekly schedule for full dataset import (`-i`) and for full
solution version retraining (`-f`), as sketched below.
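The command itself falls outside this hunk; a hedged sketch of what such an invocation might look like, assuming a `create` subcommand and a placeholder dataset-group flag (only `-i` and `-f` appear in the text above, and the cron expressions are examples):

```shell
aws-solutions-scheduler -s PersonalizeStack -r us-east-1 create \
  --dataset-group my-dataset-group \
  -i "cron(0 0 ? * 1 *)" \
  -f "cron(0 2 ? * 1 *)"
```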
@@ -117,15 +118,16 @@ solution version retraining (-f)
```shell
> aws-solutions-scheduler -s PersonalizeStack -r us-east-1 list
```
+
See sample result
```json
{
- "tasks": [
- "personalize-dataset-import-item-recommender",
- "solution-maintenance-full-item-recommender-user-personalization"
- ]
+ "tasks": [
+ "personalize-dataset-import-item-recommender",
+ "solution-maintenance-full-item-recommender-user-personalization"
+ ]
}
```
@@ -136,18 +138,19 @@ solution version retraining (-f)
```shell
> aws-solutions-scheduler -s PersonalizeStack -r us-east-1 describe --task personalize-dataset-import-item-recommender
```
+
See sample result
```json
{
- "task": {
- "active": true,
- "name": "personalize-dataset-import-item-recommender",
- "schedule": "cron(*/15 * * * ? *)",
- "step_function": "arn:aws:states:us-east-1:111122223333:stateMachine:personalizestack-periodic-dataset-import-aaaaaaaaaaaa",
- "version": "v1"
- }
+ "task": {
+ "active": true,
+ "name": "personalize-dataset-import-item-recommender",
+ "schedule": "cron(*/15 * * * ? *)",
+ "step_function": "arn:aws:states:us-east-1:111122223333:stateMachine:personalizestack-periodic-dataset-import-aaaaaaaaaaaa",
+ "version": "v1"
+ }
}
```
@@ -160,18 +163,19 @@ Deactivate schedules can be activated
```shell
> aws-solutions-scheduler -s PersonalizeStack -r us-east-1 activate --task personalize-dataset-import-item-recommender
```
+
See sample result
```json
{
- "task": {
- "active": true,
- "name": "personalize-dataset-import-item-recommender",
- "schedule": "cron(0 0 ? * 1 *)",
- "step_function": "arn:aws:states:us-east-1:111122223333:stateMachine:personalizestack-periodic-dataset-import-aaaaaaaaaaaa",
- "version": "v1"
- }
+ "task": {
+ "active": true,
+ "name": "personalize-dataset-import-item-recommender",
+ "schedule": "cron(0 0 ? * 1 *)",
+ "step_function": "arn:aws:states:us-east-1:111122223333:stateMachine:personalizestack-periodic-dataset-import-aaaaaaaaaaaa",
+ "version": "v1"
+ }
}
```
@@ -184,24 +188,25 @@ Deactivate schedules can be activated
```shell
> aws-solutions-scheduler -s PersonalizeStack -r us-east-1 deactivate --task personalize-dataset-import-item-recommender
```
+
See sample result
```json
{
- "task": {
- "active": false,
- "name": "personalize-dataset-import-item-recommender",
- "schedule": "cron(0 0 ? * 1 *)",
- "step_function": "arn:aws:states:us-east-1:111122223333:stateMachine:personalizestack-periodic-dataset-import-aaaaaaaaaaaa",
- "version": "v1"
- }
+ "task": {
+ "active": false,
+ "name": "personalize-dataset-import-item-recommender",
+ "schedule": "cron(0 0 ? * 1 *)",
+ "step_function": "arn:aws:states:us-east-1:111122223333:stateMachine:personalizestack-periodic-dataset-import-aaaaaaaaaaaa",
+ "version": "v1"
+ }
}
```
-***
+---
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
@@ -215,4 +220,4 @@ Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
-limitations under the License.
\ No newline at end of file
+limitations under the License.
diff --git a/source/scheduler/cdk/setup.py b/source/scheduler/cdk/setup.py
index 3d2e178..908bce1 100644
--- a/source/scheduler/cdk/setup.py
+++ b/source/scheduler/cdk/setup.py
@@ -45,7 +45,7 @@ def get_version():
"pip>=22.3.1",
"aws_cdk_lib==2.44.0",
"Click==8.1.3",
- "boto3==1.25.5",
+ "boto3==1.26.47",
],
python_requires=">=3.9",
classifiers=[
diff --git a/source/scheduler/common/setup.py b/source/scheduler/common/setup.py
index bad7709..a0f6e7f 100644
--- a/source/scheduler/common/setup.py
+++ b/source/scheduler/common/setup.py
@@ -43,12 +43,12 @@ def get_version():
packages=setuptools.find_namespace_packages(exclude=["build*"]),
install_requires=[
"pip>=22.3.1",
- "aws-lambda-powertools==1.29.2",
+ "aws-lambda-powertools==2.10.0",
"aws-xray-sdk==2.11.0",
"aws-solutions-python==2.0.0",
"click==8.1.3",
"cronex==0.1.3.1",
- "boto3==1.25.5",
+ "boto3==1.26.47",
"requests==2.28.1",
"crhelper==2.0.11",
"rich==12.6.0",
diff --git a/source/tests/aspects/test_personalize_app_stack.py b/source/tests/aspects/test_personalize_app_stack.py
index cd579f0..92ab843 100644
--- a/source/tests/aspects/test_personalize_app_stack.py
+++ b/source/tests/aspects/test_personalize_app_stack.py
@@ -66,11 +66,11 @@ def test_service_catalog_registry_application(synth_template):
"Tags": {
"SOLUTION_ID": "SO0170",
"SOLUTION_NAME": "Maintaining Personalized Experiences with Machine Learning",
- "SOLUTION_VERSION": "v1.3.1",
+ "SOLUTION_VERSION": "v1.4.0",
"Solutions:ApplicationType": "AWS-Solutions",
"Solutions:SolutionID": "SO0170",
"Solutions:SolutionName": "Maintaining Personalized Experiences with Machine Learning",
- "Solutions:SolutionVersion": "v1.3.1",
+ "Solutions:SolutionVersion": "v1.4.0",
},
},
)
diff --git a/source/tests/aws_lambda/__init__.py b/source/tests/aws_lambda/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_batch_inference_job/__init__.py b/source/tests/aws_lambda/create_batch_inference_job/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_batch_inference_job/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_batch_inference_job/test_batch_inference_job_handler.py b/source/tests/aws_lambda/create_batch_inference_job/test_batch_inference_job_handler.py
index 1e3fff4..8bd76a4 100644
--- a/source/tests/aws_lambda/create_batch_inference_job/test_batch_inference_job_handler.py
+++ b/source/tests/aws_lambda/create_batch_inference_job/test_batch_inference_job_handler.py
@@ -11,17 +11,122 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
-import pytest
+import os
+import pytest
from aws_lambda.create_batch_inference_job.handler import (
- lambda_handler,
+ CONFIG,
RESOURCE,
STATUS,
- CONFIG,
+ lambda_handler,
)
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import BatchInferenceJob, SolutionVersion
+
+batch_inference_name = "mockBatchJob"
+solution_version_name = "mockSolutionVersion"
def test_create_batch_inference_job_handler(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_batch_inference_tags(monkeypatch, personalize_stubber, notifier_stubber):
+ batch_inference_arn = BatchInferenceJob().arn(batch_inference_name)
+ solution_version_arn = SolutionVersion().arn(solution_version_name)
+ os.environ["ROLE_ARN"] = "roleArn"
+ personalize_stubber.add_response(
+ method="list_batch_inference_jobs",
+ expected_params={
+ "solutionVersionArn": solution_version_arn,
+ },
+ service_response={"batchInferenceJobs": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_batch_inference_job",
+ expected_params={
+ "jobName": batch_inference_name,
+ "solutionVersionArn": solution_version_arn,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": [
+ {"tagKey": "batchInference-1", "tagValue": "batchInference-key-1"},
+ ],
+ },
+ service_response={"batchInferenceJobArn": batch_inference_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "jobName": batch_inference_name,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "tags": [{"tagKey": "batchInference-1", "tagValue": "batchInference-key-1"}],
+ "solutionVersionArn": solution_version_arn,
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+ del os.environ["ROLE_ARN"]
+
+
+@mock_sts
+def test_bad_batch_inference_tags1(personalize_stubber):
+ os.environ["ROLE_ARN"] = "roleArn"
+ batch_inference_arn = BatchInferenceJob().arn(batch_inference_name)
+ solution_version_arn = SolutionVersion().arn(solution_version_name)
+
+ personalize_stubber.add_response(
+ method="list_batch_inference_jobs",
+ expected_params={
+ "solutionVersionArn": solution_version_arn,
+ },
+ service_response={"batchInferenceJobs": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_batch_inference_job",
+ expected_params={
+ "jobName": batch_inference_name,
+ "solutionVersionArn": solution_version_arn,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": "bad data",
+ },
+ service_response={"batchInferenceJobArn": batch_inference_arn},
+ )
+
+ with pytest.raises(ParamValidationError) as exc_info:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "jobName": batch_inference_name,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "tags": "bad data",
+ "solutionVersionArn": solution_version_arn,
+ }
+ },
+ None,
+ )
+ # pytest.raises fails the test if no exception is raised, unlike a bare try/except
+ assert (
+ exc_info.value.kwargs["report"]
+ == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
+
+ del os.environ["ROLE_ARN"]
diff --git a/source/tests/aws_lambda/create_batch_segment_job/__init__.py b/source/tests/aws_lambda/create_batch_segment_job/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_batch_segment_job/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_batch_segment_job/test_batch_segment_job_handler.py b/source/tests/aws_lambda/create_batch_segment_job/test_batch_segment_job_handler.py
index 429ac0c..1c4294d 100644
--- a/source/tests/aws_lambda/create_batch_segment_job/test_batch_segment_job_handler.py
+++ b/source/tests/aws_lambda/create_batch_segment_job/test_batch_segment_job_handler.py
@@ -11,17 +11,123 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
-import pytest
+import os
+import pytest
from aws_lambda.create_batch_segment_job.handler import (
- lambda_handler,
+ CONFIG,
RESOURCE,
STATUS,
- CONFIG,
+ lambda_handler,
)
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import BatchSegmentJob, SolutionVersion
+
+batch_segment_name = "mockBatchJob"
+solution_version_name = "mockSolutionVersion"
def test_create_batch_segment_job_handler(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_batch_segment_tags(monkeypatch, personalize_stubber, notifier_stubber):
+ os.environ["ROLE_ARN"] = "roleArn"
+ batch_segment_arn = BatchSegmentJob().arn(batch_segment_name)
+ solution_version_arn = SolutionVersion().arn(solution_version_name)
+
+ personalize_stubber.add_response(
+ method="list_batch_segment_jobs",
+ expected_params={
+ "solutionVersionArn": solution_version_arn,
+ },
+ service_response={"batchSegmentJobs": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_batch_segment_job",
+ expected_params={
+ "jobName": batch_segment_name,
+ "solutionVersionArn": solution_version_arn,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": [
+ {"tagKey": "batchSegment-1", "tagValue": "batchSegment-key-1"},
+ ],
+ },
+ service_response={"batchSegmentJobArn": batch_segment_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "jobName": batch_segment_name,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "tags": [{"tagKey": "batchSegment-1", "tagValue": "batchSegment-key-1"}],
+ "solutionVersionArn": solution_version_arn,
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+ del os.environ["ROLE_ARN"]
+
+
+@mock_sts
+def test_bad_batch_segment_tags(personalize_stubber):
+ os.environ["ROLE_ARN"] = "roleArn"
+ batch_segment_arn = BatchSegmentJob().arn(batch_segment_name)
+ solution_version_arn = SolutionVersion().arn(solution_version_name)
+
+ personalize_stubber.add_response(
+ method="list_batch_segment_jobs",
+ expected_params={
+ "solutionVersionArn": solution_version_arn,
+ },
+ service_response={"batchSegmentJobs": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_batch_segment_job",
+ expected_params={
+ "jobName": batch_segment_name,
+ "solutionVersionArn": solution_version_arn,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": "bad data",
+ },
+ service_response={"batchSegmentJobArn": batch_segment_arn},
+ )
+
+ with pytest.raises(ParamValidationError) as exc_info:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "jobName": batch_segment_name,
+ "jobInput": {"s3DataSource": {"path": "s3Path1", "kmsKeyArn": "kmsArn"}},
+ "jobOutput": {"s3DataDestination": {"path": "s3Path2", "kmsKeyArn": "kmsArn"}},
+ "tags": "bad data",
+ "solutionVersionArn": solution_version_arn,
+ }
+ },
+ None,
+ )
+ # pytest.raises fails the test if no exception is raised, unlike a bare try/except
+ assert (
+ exc_info.value.kwargs["report"]
+ == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
+
+ del os.environ["ROLE_ARN"]
diff --git a/source/tests/aws_lambda/create_campaign/__init__.py b/source/tests/aws_lambda/create_campaign/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_campaign/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_campaign/test_create_campaign_handler.py b/source/tests/aws_lambda/create_campaign/test_create_campaign_handler.py
index 613ee5a..125e64e 100644
--- a/source/tests/aws_lambda/create_campaign/test_create_campaign_handler.py
+++ b/source/tests/aws_lambda/create_campaign/test_create_campaign_handler.py
@@ -14,16 +14,11 @@
from datetime import datetime, timedelta
import pytest
+from aws_lambda.create_campaign.handler import CONFIG, RESOURCE, STATUS, lambda_handler
+from botocore.exceptions import ParamValidationError
from dateutil.parser import isoparse
from dateutil.tz import tzlocal
from moto import mock_sts
-
-from aws_lambda.create_campaign.handler import (
- lambda_handler,
- RESOURCE,
- STATUS,
- CONFIG,
-)
from shared.exceptions import ResourcePending
from shared.resource import Campaign, SolutionVersion
@@ -38,14 +33,14 @@ def test_create_campaign(validate_handler_config):
@mock_sts
def test_describe_campaign_response(personalize_stubber, notifier_stubber):
- c_name = "cp_name"
+ campaign_name = "mockCampaign"
sv_arn = SolutionVersion().arn("unit_test", sv_id="12345678")
personalize_stubber.add_response(
method="describe_campaign",
service_response={
"campaign": {
- "campaignArn": Campaign().arn(c_name),
- "name": c_name,
+ "campaignArn": Campaign().arn(campaign_name),
+ "name": campaign_name,
"solutionVersionArn": sv_arn,
"minProvisionedTPS": 1,
"status": "ACTIVE",
@@ -53,15 +48,16 @@ def test_describe_campaign_response(personalize_stubber, notifier_stubber):
"creationDateTime": datetime.now(tz=tzlocal()) - timedelta(seconds=100),
}
},
- expected_params={"campaignArn": Campaign().arn(c_name)},
+ expected_params={"campaignArn": Campaign().arn(campaign_name)},
)
result = lambda_handler(
{
"serviceConfig": {
- "name": c_name,
+ "name": campaign_name,
"solutionVersionArn": sv_arn,
"minProvisionedTPS": 1,
+ "tags": [{"tagKey": "campaign-1", "tagValue": "campaign-key-1"}],
},
"workflowConfig": {
"maxAge": "365 days",
@@ -77,28 +73,28 @@ def test_describe_campaign_response(personalize_stubber, notifier_stubber):
@mock_sts
def test_create_campaign_response(personalize_stubber, notifier_stubber):
- c_name = "cp_name"
+ campaign_name = "mockCampaign"
sv_arn = SolutionVersion().arn("unit_test", sv_id="12345678")
personalize_stubber.add_client_error(
method="describe_campaign",
service_error_code="ResourceNotFoundException",
- expected_params={"campaignArn": Campaign().arn(c_name)},
+ expected_params={"campaignArn": Campaign().arn(campaign_name)},
)
personalize_stubber.add_response(
method="create_campaign",
expected_params={
- "name": c_name,
+ "name": campaign_name,
"minProvisionedTPS": 1,
"solutionVersionArn": sv_arn,
},
- service_response={"campaignArn": Campaign().arn(c_name)},
+ service_response={"campaignArn": Campaign().arn(campaign_name)},
)
with pytest.raises(ResourcePending):
lambda_handler(
{
"serviceConfig": {
- "name": c_name,
+ "name": campaign_name,
"solutionVersionArn": sv_arn,
"minProvisionedTPS": 1,
},
@@ -116,15 +112,15 @@ def test_create_campaign_response(personalize_stubber, notifier_stubber):
@mock_sts
def test_update_campaign_start(personalize_stubber, notifier_stubber):
- c_name = "cp_name"
+ campaign_name = "mockCampaign"
sv_arn_old = SolutionVersion().arn("unit_test", sv_id="12345678")
sv_arn_new = SolutionVersion().arn("unit_test", sv_id="01234567")
personalize_stubber.add_response(
method="describe_campaign",
service_response={
"campaign": {
- "campaignArn": Campaign().arn(c_name),
- "name": c_name,
+ "campaignArn": Campaign().arn(campaign_name),
+ "name": campaign_name,
"solutionVersionArn": sv_arn_old,
"minProvisionedTPS": 1,
"status": "ACTIVE",
@@ -132,15 +128,15 @@ def test_update_campaign_start(personalize_stubber, notifier_stubber):
"creationDateTime": datetime.now(tz=tzlocal()) - timedelta(seconds=100),
}
},
- expected_params={"campaignArn": Campaign().arn(c_name)},
+ expected_params={"campaignArn": Campaign().arn(campaign_name)},
)
personalize_stubber.add_response(
method="update_campaign",
service_response={
- "campaignArn": Campaign().arn(c_name),
+ "campaignArn": Campaign().arn(campaign_name),
},
expected_params={
- "campaignArn": Campaign().arn(c_name),
+ "campaignArn": Campaign().arn(campaign_name),
"minProvisionedTPS": 1,
"solutionVersionArn": sv_arn_new,
},
@@ -149,11 +145,7 @@ def test_update_campaign_start(personalize_stubber, notifier_stubber):
with pytest.raises(ResourcePending):
lambda_handler(
{
- "serviceConfig": {
- "name": c_name,
- "solutionVersionArn": sv_arn_new,
- "minProvisionedTPS": 1,
- },
+ "serviceConfig": {"name": campaign_name, "solutionVersionArn": sv_arn_new, "minProvisionedTPS": 1},
"workflowConfig": {
"maxAge": "365 days",
"timeStarted": "2021-10-19T15:18:32Z",
@@ -168,15 +160,15 @@ def test_update_campaign_start(personalize_stubber, notifier_stubber):
@mock_sts
def test_describe_campaign_response_updating(personalize_stubber, notifier_stubber):
- c_name = "cp_name"
+ campaign_name = "mockCampaign"
sv_arn_old = SolutionVersion().arn("unit_test", sv_id="12345678")
sv_arn_new = SolutionVersion().arn("unit_test", sv_id="01234567")
personalize_stubber.add_response(
method="describe_campaign",
service_response={
"campaign": {
- "campaignArn": Campaign().arn(c_name),
- "name": c_name,
+ "campaignArn": Campaign().arn(campaign_name),
+ "name": campaign_name,
"solutionVersionArn": sv_arn_old,
"minProvisionedTPS": 1,
"status": "ACTIVE",
@@ -191,7 +183,7 @@ def test_describe_campaign_response_updating(personalize_stubber, notifier_stubb
},
}
},
- expected_params={"campaignArn": Campaign().arn(c_name)},
+ expected_params={"campaignArn": Campaign().arn(campaign_name)},
)
personalize_stubber.add_client_error(
method="update_campaign",
@@ -199,13 +191,9 @@ def test_describe_campaign_response_updating(personalize_stubber, notifier_stubb
)
with pytest.raises(ResourcePending):
- result = lambda_handler(
+ lambda_handler(
{
- "serviceConfig": {
- "name": c_name,
- "solutionVersionArn": sv_arn_new,
- "minProvisionedTPS": 1,
- },
+ "serviceConfig": {"name": campaign_name, "solutionVersionArn": sv_arn_new, "minProvisionedTPS": 1},
"workflowConfig": {
"maxAge": "365 days",
"timeStarted": "2021-10-19T15:18:32Z",
@@ -220,14 +208,14 @@ def test_describe_campaign_response_updating(personalize_stubber, notifier_stubb
@mock_sts
def test_describe_campaign_response_updated(personalize_stubber, notifier_stubber):
- c_name = "cp_name"
+ campaign_name = "mockCampaign"
sv_arn_new = SolutionVersion().arn("unit_test", sv_id="01234567")
personalize_stubber.add_response(
method="describe_campaign",
service_response={
"campaign": {
- "campaignArn": Campaign().arn(c_name),
- "name": c_name,
+ "campaignArn": Campaign().arn(campaign_name),
+ "name": campaign_name,
"solutionVersionArn": sv_arn_new,
"minProvisionedTPS": 1,
"status": "ACTIVE",
@@ -242,15 +230,16 @@ def test_describe_campaign_response_updated(personalize_stubber, notifier_stubbe
},
}
},
- expected_params={"campaignArn": Campaign().arn(c_name)},
+ expected_params={"campaignArn": Campaign().arn(campaign_name)},
)
result = lambda_handler(
{
"serviceConfig": {
- "name": c_name,
+ "name": campaign_name,
"solutionVersionArn": sv_arn_new,
"minProvisionedTPS": 1,
+ "tags": [{"tagKey": "campaign-1", "tagValue": "campaign-key-1"}],
},
"workflowConfig": {
"maxAge": "365 days",
@@ -267,3 +256,53 @@ def test_describe_campaign_response_updated(personalize_stubber, notifier_stubbe
last_updated = isoparse(notifier_stubber.get_resource_last_updated(Campaign(), {"campaign": result}))
created = isoparse(notifier_stubber.get_resource_created(Campaign(), {"campaign": result}))
assert (last_updated - created).seconds == 100
+
+
+@mock_sts
+def test_bad_campaign_tags(personalize_stubber, notifier_stubber):
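+    # A non-list "tags" value should surface botocore's client-side
+    # ParamValidationError rather than reaching the Personalize API.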
+ campaign_name = "mockCampaign"
+ sv_arn_new = SolutionVersion().arn("unit_test", sv_id="01234567")
+ personalize_stubber.add_response(
+ method="describe_campaign",
+ service_response={
+ "campaign": {
+ "campaignArn": Campaign().arn(campaign_name),
+ "name": campaign_name,
+ "solutionVersionArn": sv_arn_new,
+ "minProvisionedTPS": 1,
+ "status": "ACTIVE",
+ "lastUpdatedDateTime": datetime.now(tzlocal()) - timedelta(seconds=1000),
+ "creationDateTime": datetime.now(tz=tzlocal()) - timedelta(seconds=1100),
+ "latestCampaignUpdate": {
+ "minProvisionedTPS": 1,
+ "solutionVersionArn": sv_arn_new,
+ "creationDateTime": datetime.now(tzlocal()) - timedelta(seconds=100),
+ "lastUpdatedDateTime": datetime.now(tzlocal()),
+ "status": "ACTIVE",
+ },
+ }
+ },
+ expected_params={"campaignArn": Campaign().arn(campaign_name)},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": campaign_name,
+ "solutionVersionArn": sv_arn_new,
+ "minProvisionedTPS": 1,
+ "tags": "bad data",
+ },
+ "workflowConfig": {
+ "maxAge": "365 days",
+ "timeStarted": "2021-10-19T15:18:32Z",
+ },
+ },
+ None,
+ )
+ except ParamValidationError as exp:
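+        # botocore's validation report names the offending value and the
+        # accepted container types (list/tuple) for "tags".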
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
diff --git a/source/tests/aws_lambda/create_config/__init__.py b/source/tests/aws_lambda/create_config/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_config/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_config/test_create_config_handler.py b/source/tests/aws_lambda/create_config/test_create_config_handler.py
index b9c9578..85542de 100644
--- a/source/tests/aws_lambda/create_config/test_create_config_handler.py
+++ b/source/tests/aws_lambda/create_config/test_create_config_handler.py
@@ -13,15 +13,15 @@
from aws_lambda.create_config.handler import lambda_handler
from shared.resource import (
- DatasetGroup,
- Dataset,
- Solution,
- Campaign,
- SolutionVersion,
BatchInferenceJob,
+ BatchSegmentJob,
+ Campaign,
+ Dataset,
+ DatasetGroup,
EventTracker,
Schema,
- BatchSegmentJob,
+ Solution,
+ SolutionVersion,
)
diff --git a/source/tests/aws_lambda/create_dataset/__init__.py b/source/tests/aws_lambda/create_dataset/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_dataset/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_dataset/test_dataset_handler.py b/source/tests/aws_lambda/create_dataset/test_dataset_handler.py
index 80ac0d6..be018a5 100644
--- a/source/tests/aws_lambda/create_dataset/test_dataset_handler.py
+++ b/source/tests/aws_lambda/create_dataset/test_dataset_handler.py
@@ -11,16 +11,107 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
+from datetime import datetime, timedelta
+
import pytest
+from aws_lambda.create_dataset.handler import CONFIG, RESOURCE, lambda_handler
+from botocore.exceptions import ParamValidationError
+from dateutil.tz import tzlocal
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import Dataset, DatasetGroup
-from aws_lambda.create_dataset.handler import (
- lambda_handler,
- RESOURCE,
- CONFIG,
-)
+dataset_group_name = "mockDatasetGroup"
+dataset_name = "mockDataset"
def test_create_dataset_handler(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_dataset_tags(personalize_stubber, notifier_stubber):
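+    # list_datasets returns no existing datasets, so the handler creates the
+    # dataset with the supplied tags and raises ResourcePending while it builds.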
+ dataset_arn = Dataset().arn(dataset_name)
+ dataset_group_arn = DatasetGroup().arn(dataset_group_name)
+
+ personalize_stubber.add_response(
+ method="list_datasets",
+ expected_params={"datasetGroupArn": dataset_group_arn},
+ service_response={"datasets": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_dataset",
+ expected_params={
+ "name": dataset_name,
+ "schemaArn": "schemaArn",
+ "datasetGroupArn": dataset_group_arn,
+ "datasetType": "INTERACTIONS",
+ "tags": [
+ {"tagKey": "dataset-1", "tagValue": "dataset-key-1"},
+ ],
+ },
+ service_response={"datasetArn": dataset_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": dataset_name,
+ "schemaArn": "schemaArn",
+ "datasetGroupArn": dataset_group_arn,
+ "datasetType": "INTERACTIONS",
+ "tags": [{"tagKey": "dataset-1", "tagValue": "dataset-key-1"}],
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+
+@mock_sts
+def test_bad_dataset_tags(personalize_stubber):
+ dataset_arn = Dataset().arn(dataset_name)
+ dataset_group_arn = DatasetGroup().arn(dataset_group_name)
+
+ personalize_stubber.add_response(
+ method="list_datasets",
+ expected_params={"datasetGroupArn": dataset_group_arn},
+ service_response={"datasets": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_dataset",
+ expected_params={
+ "name": dataset_name,
+ "schemaArn": "schemaArn",
+ "datasetGroupArn": dataset_group_arn,
+ "datasetType": "INTERACTIONS",
+ "tags": "bad data",
+ },
+ service_response={"datasetArn": dataset_arn},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": dataset_name,
+ "schemaArn": "schemaArn",
+ "datasetGroupArn": dataset_group_arn,
+ "datasetType": "INTERACTIONS",
+ "tags": "bad data",
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
diff --git a/source/tests/aws_lambda/create_dataset_group/__init__.py b/source/tests/aws_lambda/create_dataset_group/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_dataset_group/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_dataset_group/test_dataset_group_handler.py b/source/tests/aws_lambda/create_dataset_group/test_dataset_group_handler.py
index caedbd7..22846f9 100644
--- a/source/tests/aws_lambda/create_dataset_group/test_dataset_group_handler.py
+++ b/source/tests/aws_lambda/create_dataset_group/test_dataset_group_handler.py
@@ -11,17 +11,150 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
-import pytest
+from datetime import datetime, timedelta
+import pytest
from aws_lambda.create_dataset_group.handler import (
- lambda_handler,
+ CONFIG,
RESOURCE,
STATUS,
- CONFIG,
+ lambda_handler,
)
+from botocore.exceptions import ParamValidationError
+from dateutil.tz import tzlocal
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.personalize_service import Personalize
+from shared.resource import DatasetGroup
+
+dataset_group_name = "mockDatasetGroup"
def test_handler(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_dsg_tags(personalize_stubber, notifier_stubber):
+ """
+ The typical workflow is to describe, then create, then raise ResourcePending
+ """
+ dataset_group_arn = DatasetGroup().arn(dataset_group_name)
+ personalize_stubber.add_client_error(
+ method="describe_dataset_group",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"datasetGroupArn": dataset_group_arn},
+ )
+ personalize_stubber.add_response(
+ method="create_dataset_group",
+ expected_params={
+ "name": dataset_group_name,
+ "tags": [
+ {"tagKey": "datasetGroup-1", "tagValue": "datasetGroup-key-1"},
+ ],
+ },
+ service_response={"datasetGroupArn": dataset_group_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": dataset_group_name,
+ "tags": [{"tagKey": "datasetGroup-1", "tagValue": "datasetGroup-key-1"}],
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+
+@mock_sts
+def test_dsg_bad_tags(personalize_stubber):
+ """
+    An invalid "tags" type should fail botocore parameter validation instead of creating the dataset group
+ """
+ dataset_group_arn = DatasetGroup().arn(dataset_group_name)
+ personalize_stubber.add_client_error(
+ method="describe_dataset_group",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"datasetGroupArn": dataset_group_arn},
+ )
+ personalize_stubber.add_response(
+ method="create_dataset_group",
+ expected_params={
+ "name": dataset_group_name,
+ "tags": "bad data",
+ },
+ service_response={"datasetGroupArn": dataset_group_arn},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": dataset_group_name,
+ "tags": "bad data",
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
+
+
+@mock_sts
+def test_dsg_list_tags(personalize_stubber, notifier_stubber):
+ """
+    An existing (ACTIVE) dataset group is described and its tags are listed back
+ """
+ dsg_name = "mockDatasetGroup"
+ dataset_group_arn = DatasetGroup().arn(dataset_group_name)
+ personalize_stubber.add_response(
+ method="describe_dataset_group",
+ service_response={
+ "datasetGroup": {
+ "name": dsg_name,
+ "datasetGroupArn": dataset_group_arn,
+ "status": "ACTIVE",
+ "lastUpdatedDateTime": datetime.now(tzlocal()),
+ "creationDateTime": datetime.now(tz=tzlocal()) - timedelta(seconds=100),
+ "roleArn": "roleArn",
+ "kmsKeyArn": "kmsArn",
+ }
+ },
+ expected_params={"datasetGroupArn": dataset_group_arn},
+ )
+
+ personalize_stubber.add_response(
+ method="list_tags_for_resource",
+ expected_params={"resourceArn": dataset_group_arn},
+ service_response={
+ "tags": [
+ {"tagKey": "datasetGroup-1", "tagValue": "datasetGroup-key-1"},
+ ]
+ },
+ )
+
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": dsg_name,
+ "tags": [{"tagKey": "datasetGroup-1", "tagValue": "datasetGroup-key-1"}],
+ }
+ },
+ None,
+ )
+
+ cli = Personalize()
+ arn = DatasetGroup().arn(dsg_name)
+ assert cli.list_tags_for_resource(resourceArn=arn) == {
+ "tags": [{"tagKey": "datasetGroup-1", "tagValue": "datasetGroup-key-1"}]
+ }
diff --git a/source/tests/aws_lambda/create_dataset_import_job/__init__.py b/source/tests/aws_lambda/create_dataset_import_job/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_dataset_import_job/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_dataset_import_job/test_dataset_import_job_handler.py b/source/tests/aws_lambda/create_dataset_import_job/test_dataset_import_job_handler.py
index 30e483e..5d0591f 100644
--- a/source/tests/aws_lambda/create_dataset_import_job/test_dataset_import_job_handler.py
+++ b/source/tests/aws_lambda/create_dataset_import_job/test_dataset_import_job_handler.py
@@ -11,17 +11,134 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
-import pytest
+import os
+import pytest
from aws_lambda.create_dataset_import_job.handler import (
- lambda_handler,
+ CONFIG,
RESOURCE,
STATUS,
- CONFIG,
+ lambda_handler,
)
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import Dataset, DatasetGroup, DatasetImportJob
+
+dataset_name = "mockDataset"
+dataset_arn = Dataset().arn(dataset_name)
+dataset_import_arn = DatasetImportJob().arn("mockDatasetImport")
+dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
def test_create_dataset_import_job_handler(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_data_import_tags(mocker, personalize_stubber, notifier_stubber):
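+    # The import role is provided via the environment and passed through
+    # unchanged as the job's roleArn.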
+ os.environ["ROLE_ARN"] = "roleArn"
+ dataset_arn = Dataset().arn(dataset_name)
+ dataset_import_arn = DatasetImportJob().arn("mockDatasetImport")
+
+ personalize_stubber.add_response(
+ method="list_dataset_import_jobs",
+ expected_params={"datasetArn": dataset_arn},
+ service_response={"datasetImportJobs": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_dataset_import_job",
+ expected_params={
+ "jobName": dataset_name,
+ "datasetArn": dataset_arn,
+ "dataSource": {"dataLocation": "s3://path/to/file"},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": [
+ {"tagKey": "datasetImport-1", "tagValue": "datasetImport-key-1"},
+ ],
+ "importMode": "FULL",
+ "publishAttributionMetricsToS3": True,
+ },
+ service_response={"datasetImportJobArn": dataset_import_arn},
+ )
+
+ mocker.patch("shared.s3.S3.exists", True)
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "jobName": dataset_name,
+ "datasetArn": dataset_arn,
+ "dataSource": {"dataLocation": "s3://path/to/file"},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": [
+ {"tagKey": "datasetImport-1", "tagValue": "datasetImport-key-1"},
+ ],
+ "importMode": "FULL",
+ "publishAttributionMetricsToS3": True,
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+ del os.environ["ROLE_ARN"]
+
+
+@mock_sts
+def test_bad_data_import_tags(mocker, personalize_stubber):
+ dataset_arn = Dataset().arn(dataset_name)
+ dataset_import_arn = DatasetImportJob().arn("mockDatasetImport")
+
+ os.environ["ROLE_ARN"] = "roleArn"
+
+ personalize_stubber.add_response(
+ method="list_dataset_import_jobs",
+ expected_params={"datasetArn": dataset_arn},
+ service_response={"datasetImportJobs": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_dataset_import_job",
+ expected_params={
+ "jobName": dataset_name,
+ "datasetArn": dataset_arn,
+ "dataSource": {"dataLocation": "s3://path/to/file"},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": "bad data",
+ "importMode": "FULL",
+ "publishAttributionMetricsToS3": True,
+ },
+ service_response={"datasetImportJobArn": dataset_import_arn},
+ )
+
+ mocker.patch("shared.s3.S3.exists", True)
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "jobName": dataset_name,
+ "datasetArn": dataset_arn,
+ "dataSource": {"dataLocation": "s3://path/to/file"},
+ "roleArn": os.getenv("ROLE_ARN"),
+ "tags": "bad data",
+ "importMode": "FULL",
+ "publishAttributionMetricsToS3": True,
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
+
+ del os.environ["ROLE_ARN"]
diff --git a/source/tests/aws_lambda/create_event_tracker/__init__.py b/source/tests/aws_lambda/create_event_tracker/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_event_tracker/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_event_tracker/test_create_event_tracker_handler.py b/source/tests/aws_lambda/create_event_tracker/test_create_event_tracker_handler.py
index 449ceea..352b707 100644
--- a/source/tests/aws_lambda/create_event_tracker/test_create_event_tracker_handler.py
+++ b/source/tests/aws_lambda/create_event_tracker/test_create_event_tracker_handler.py
@@ -11,17 +11,108 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
-import pytest
+import os
+import pytest
from aws_lambda.create_event_tracker.handler import (
- lambda_handler,
+ CONFIG,
RESOURCE,
STATUS,
- CONFIG,
+ lambda_handler,
)
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import DatasetGroup, EventTracker
+
+etracker_name = "mockEventTracker"
+event_tracker_arn = EventTracker().arn(etracker_name)
+dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
def test_create_event_tracker(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_event_tracker_tags(personalize_stubber, notifier_stubber):
+ event_tracker_arn = EventTracker().arn(etracker_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+
+ personalize_stubber.add_response(
+ method="list_event_trackers",
+ expected_params={
+ "datasetGroupArn": dataset_group_arn,
+ },
+ service_response={"eventTrackers": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_event_tracker",
+ expected_params={
+ "name": etracker_name,
+ "datasetGroupArn": dataset_group_arn,
+ "tags": [
+ {"tagKey": "et-1", "tagValue": "et-key-1"},
+ ],
+ },
+ service_response={"eventTrackerArn": event_tracker_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": etracker_name,
+ "datasetGroupArn": dataset_group_arn,
+ "tags": [{"tagKey": "et-1", "tagValue": "et-key-1"}],
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+
+@mock_sts
+def test_bad_event_tracker_tags(personalize_stubber):
+ event_tracker_arn = EventTracker().arn(etracker_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+
+ personalize_stubber.add_response(
+ method="list_event_trackers",
+ expected_params={
+ "datasetGroupArn": dataset_group_arn,
+ },
+ service_response={"eventTrackers": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_event_tracker",
+ expected_params={
+ "name": etracker_name,
+ "datasetGroupArn": dataset_group_arn,
+ "tags": "bad data",
+ },
+ service_response={"eventTrackerArn": event_tracker_arn},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": etracker_name,
+ "datasetGroupArn": dataset_group_arn,
+ "tags": "bad data",
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
diff --git a/source/tests/aws_lambda/create_filter/__init__.py b/source/tests/aws_lambda/create_filter/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_filter/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_filter/test_create_filter_handler.py b/source/tests/aws_lambda/create_filter/test_create_filter_handler.py
index f605cdc..d32a1a9 100644
--- a/source/tests/aws_lambda/create_filter/test_create_filter_handler.py
+++ b/source/tests/aws_lambda/create_filter/test_create_filter_handler.py
@@ -11,17 +11,101 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
+import os
+
import pytest
+from aws_lambda.create_filter.handler import CONFIG, RESOURCE, STATUS, lambda_handler
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import DatasetGroup, Filter
-from aws_lambda.create_filter.handler import (
- lambda_handler,
- RESOURCE,
- STATUS,
- CONFIG,
-)
+filter_name = "mockFilter"
def test_create_filter(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_filter_tags(personalize_stubber, notifier_stubber):
+ filter_arn = Filter().arn(filter_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+
+ personalize_stubber.add_client_error(
+ method="describe_filter",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"filterArn": filter_arn},
+ )
+
+ personalize_stubber.add_response(
+ method="create_filter",
+ expected_params={
+ "name": filter_name,
+ "datasetGroupArn": dataset_group_arn,
+ "filterExpression": "SOME-EXPRESSION",
+ "tags": [
+ {"tagKey": "filter-1", "tagValue": "filter-key-1"},
+ ],
+ },
+ service_response={"filterArn": filter_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": filter_name,
+ "datasetGroupArn": dataset_group_arn,
+ "filterExpression": "SOME-EXPRESSION",
+ "tags": [{"tagKey": "filter-1", "tagValue": "filter-key-1"}],
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+
+@mock_sts
+def test_bad_filter_tags(personalize_stubber):
+ filter_arn = Filter().arn(filter_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+
+ personalize_stubber.add_client_error(
+ method="describe_filter",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"filterArn": filter_arn},
+ )
+
+ personalize_stubber.add_response(
+ method="create_filter",
+ expected_params={
+ "name": filter_name,
+ "datasetGroupArn": dataset_group_arn,
+ "filterExpression": "SOME-EXPRESSION",
+ "tags": "bad data",
+ },
+ service_response={"filterArn": filter_arn},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": filter_name,
+ "datasetGroupArn": dataset_group_arn,
+ "filterExpression": "SOME-EXPRESSION",
+ "tags": "bad data",
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
diff --git a/source/tests/aws_lambda/create_recommender/__init__.py b/source/tests/aws_lambda/create_recommender/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_recommender/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_recommender/test_create_recommender_handler.py b/source/tests/aws_lambda/create_recommender/test_create_recommender_handler.py
index 4352784..d9a3f36 100644
--- a/source/tests/aws_lambda/create_recommender/test_create_recommender_handler.py
+++ b/source/tests/aws_lambda/create_recommender/test_create_recommender_handler.py
@@ -11,17 +11,104 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
-import pytest
+import os
+import pytest
from aws_lambda.create_recommender.handler import (
- lambda_handler,
+ CONFIG,
RESOURCE,
STATUS,
- CONFIG,
+ lambda_handler,
)
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import DatasetGroup, Recommender
+
+recommender_name = "recommender-1"
def test_create_recommender_handler(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_recommender_tags(personalize_stubber, notifier_stubber):
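+    # describe_recommender raises ResourceNotFoundException, so the handler
+    # falls through to create_recommender with the configured tags.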
+ recommender_arn = Recommender().arn(recommender_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+ personalize_stubber.add_client_error(
+ method="describe_recommender",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"recommenderArn": recommender_arn},
+ )
+
+ personalize_stubber.add_response(
+ method="create_recommender",
+ expected_params={
+ "name": recommender_name,
+ "datasetGroupArn": dataset_group_arn,
+ "recipeArn": "recipeArn",
+ "tags": [
+ {"tagKey": "recommender-1", "tagValue": "recommender-key-1"},
+ ],
+ },
+ service_response={"recommenderArn": recommender_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": recommender_name,
+ "datasetGroupArn": dataset_group_arn,
+ "recipeArn": "recipeArn",
+ "tags": [{"tagKey": "recommender-1", "tagValue": "recommender-key-1"}],
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+
+@mock_sts
+def test_bad_recommender_tags(personalize_stubber):
+ recommender_arn = Recommender().arn(recommender_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+ personalize_stubber.add_client_error(
+ method="describe_recommender",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"recommenderArn": recommender_arn},
+ )
+
+ personalize_stubber.add_response(
+ method="create_recommender",
+ expected_params={
+ "name": recommender_name,
+ "datasetGroupArn": dataset_group_arn,
+ "recipeArn": "recipeArn",
+ "tags": "bad data",
+ },
+ service_response={"recommenderArn": recommender_arn},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": recommender_name,
+ "datasetGroupArn": dataset_group_arn,
+ "recipeArn": "recipeArn",
+ "tags": "bad data",
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
diff --git a/source/tests/aws_lambda/create_schema/__init__.py b/source/tests/aws_lambda/create_schema/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_schema/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_schema/create_schema_handler.py b/source/tests/aws_lambda/create_schema/create_schema_handler.py
index 0d2184a..83b2260 100644
--- a/source/tests/aws_lambda/create_schema/create_schema_handler.py
+++ b/source/tests/aws_lambda/create_schema/create_schema_handler.py
@@ -12,12 +12,7 @@
# ######################################################################################################################
import pytest
-
-from aws_lambda.create_schema.handler import (
- lambda_handler,
- RESOURCE,
- CONFIG,
-)
+from aws_lambda.create_schema.handler import CONFIG, RESOURCE, lambda_handler
def test_create_schema_handler(validate_handler_config):
diff --git a/source/tests/aws_lambda/create_solution/__init__.py b/source/tests/aws_lambda/create_solution/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_solution/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_solution/test_create_solution_handler.py b/source/tests/aws_lambda/create_solution/test_create_solution_handler.py
index e4b8b64..e6feb66 100644
--- a/source/tests/aws_lambda/create_solution/test_create_solution_handler.py
+++ b/source/tests/aws_lambda/create_solution/test_create_solution_handler.py
@@ -11,17 +11,101 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
+import os
+
import pytest
+from aws_lambda.create_solution.handler import CONFIG, RESOURCE, STATUS, lambda_handler
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import ResourcePending
+from shared.resource import DatasetGroup, Solution
-from aws_lambda.create_solution.handler import (
- lambda_handler,
- RESOURCE,
- STATUS,
- CONFIG,
-)
+solution_name = "mockSolution"
def test_create_solution(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_solution_tags(personalize_stubber, notifier_stubber):
+ solution_arn = Solution().arn(solution_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+
+ personalize_stubber.add_client_error(
+ method="describe_solution",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"solutionArn": solution_arn},
+ )
+
+ personalize_stubber.add_response(
+ method="create_solution",
+ expected_params={
+ "name": solution_name,
+ "recipeArn": "recipeArn",
+ "datasetGroupArn": dataset_group_arn,
+ "tags": [
+ {"tagKey": "solution-1", "tagValue": "solution-key-1"},
+ ],
+ },
+ service_response={"solutionArn": solution_arn},
+ )
+
+ with pytest.raises(ResourcePending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": solution_name,
+ "recipeArn": "recipeArn",
+ "datasetGroupArn": dataset_group_arn,
+ "tags": [{"tagKey": "solution-1", "tagValue": "solution-key-1"}],
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+
+@mock_sts
+def test_bad_solution_tags(personalize_stubber):
+ solution_arn = Solution().arn(solution_name)
+ dataset_group_arn = DatasetGroup().arn("mockDatasetGroup")
+
+ personalize_stubber.add_client_error(
+ method="describe_solution",
+ service_error_code="ResourceNotFoundException",
+ expected_params={"solutionArn": solution_arn},
+ )
+
+ personalize_stubber.add_response(
+ method="create_solution",
+ expected_params={
+ "name": solution_name,
+ "recipeArn": "recipeArn",
+ "datasetGroupArn": dataset_group_arn,
+ "tags": "bad data",
+ },
+ service_response={"solutionArn": solution_arn},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "name": solution_name,
+ "recipeArn": "recipeArn",
+ "datasetGroupArn": dataset_group_arn,
+ "tags": "bad data",
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
diff --git a/source/tests/aws_lambda/create_solution_version/__init__.py b/source/tests/aws_lambda/create_solution_version/__init__.py
new file mode 100644
index 0000000..ef2f9eb
--- /dev/null
+++ b/source/tests/aws_lambda/create_solution_version/__init__.py
@@ -0,0 +1,12 @@
+# ######################################################################################################################
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
+# #
+# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
+# with the License. You may obtain a copy of the License at #
+# #
+# http://www.apache.org/licenses/LICENSE-2.0 #
+# #
+# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed #
+# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
+# the specific language governing permissions and limitations under the License. #
+# ######################################################################################################################
diff --git a/source/tests/aws_lambda/create_solution_version/test_create_solution_version_handler.py b/source/tests/aws_lambda/create_solution_version/test_create_solution_version_handler.py
index 9e3c1d2..5a2f91e 100644
--- a/source/tests/aws_lambda/create_solution_version/test_create_solution_version_handler.py
+++ b/source/tests/aws_lambda/create_solution_version/test_create_solution_version_handler.py
@@ -11,17 +11,102 @@
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
-import pytest
+import os
+import pytest
from aws_lambda.create_solution_version.handler import (
- lambda_handler,
+ CONFIG,
RESOURCE,
STATUS,
- CONFIG,
+ lambda_handler,
)
+from botocore.exceptions import ParamValidationError
+from moto import mock_sts
+from shared.exceptions import SolutionVersionPending
+from shared.resource import Solution, SolutionVersion
+
+solution_version_name = "abcdefghi"  # hashed name of the solution version
def test_create_solution_version_handler(validate_handler_config):
validate_handler_config(RESOURCE, CONFIG, STATUS)
with pytest.raises(ValueError):
lambda_handler({}, None)
+
+
+@mock_sts
+def test_solutionv_tags(personalize_stubber, notifier_stubber):
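+    # Solution versions signal in-progress training with SolutionVersionPending
+    # rather than the ResourcePending used by other resources.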
+ solutionv_arn = SolutionVersion().arn(solution_version_name)
+ solution_arn = Solution().arn("solName")
+
+ personalize_stubber.add_response(
+ method="list_solution_versions",
+ expected_params={"solutionArn": solution_arn},
+ service_response={"solutionVersions": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_solution_version",
+ expected_params={
+ "solutionArn": solution_arn,
+ "trainingMode": "FULL",
+ "tags": [
+ {"tagKey": "solutionVersion-1", "tagValue": "solutionVersion-key-1"},
+ ],
+ },
+ service_response={"solutionVersionArn": solutionv_arn},
+ )
+
+ with pytest.raises(SolutionVersionPending):
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "solutionArn": solution_arn,
+ "trainingMode": "FULL",
+ "tags": [{"tagKey": "solutionVersion-1", "tagValue": "solutionVersion-key-1"}],
+ }
+ },
+ None,
+ )
+
+ assert notifier_stubber.has_notified_for_creation
+ assert notifier_stubber.latest_notification_status == "CREATING"
+
+
+@mock_sts
+def test_solutionv_bad_tags(personalize_stubber):
+ solutionv_arn = SolutionVersion().arn(solution_version_name)
+ solution_arn = Solution().arn("solName")
+
+ personalize_stubber.add_response(
+ method="list_solution_versions",
+ expected_params={"solutionArn": solution_arn},
+ service_response={"solutionVersions": []},
+ )
+
+ personalize_stubber.add_response(
+ method="create_solution_version",
+ expected_params={
+ "solutionArn": solution_arn,
+ "trainingMode": "FULL",
+ "tags": "bad data",
+ },
+ service_response={"solutionVersionArn": solutionv_arn},
+ )
+
+ try:
+ lambda_handler(
+ {
+ "serviceConfig": {
+ "solutionArn": solution_arn,
+ "trainingMode": "FULL",
+ "tags": "bad data",
+ }
+ },
+ None,
+ )
+ except ParamValidationError as exp:
+ assert (
+ exp.kwargs["report"]
+            == "Invalid type for parameter tags, value: bad data, type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>"
+ )
diff --git a/source/tests/aws_lambda/s3_event/test_s3_event_handler.py b/source/tests/aws_lambda/s3_event/test_s3_event_handler.py
index 381015f..7f733d7 100644
--- a/source/tests/aws_lambda/s3_event/test_s3_event_handler.py
+++ b/source/tests/aws_lambda/s3_event/test_s3_event_handler.py
@@ -15,10 +15,9 @@
import boto3
import pytest
-from moto import mock_s3, mock_stepfunctions, mock_sns, mock_sts
-
from aws_lambda.s3_event.handler import lambda_handler
from aws_solutions.core.helpers import _helpers_service_clients
+from moto import mock_s3, mock_sns, mock_stepfunctions, mock_sts
@pytest.fixture
@@ -169,3 +168,35 @@ def test_s3_event_handler_bad_config(s3_event, sns_mocked, s3_mocked, stepfuncti
stateMachineArn=environ.get("STATE_MACHINE_ARN"),
)
assert len(executions["executions"]) == 0
+
+
+@mock_sts
+def test_s3_event_handler_bad_tags(s3_event, sns_mocked, s3_mocked, stepfunctions_mocked):
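+    # A config with a misspelled tag key ("tagKeys") should fail validation,
+    # so no state machine execution is started.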
+ s3_mocked.put_object(
+ Bucket="bucket-name",
+ Key="train/object-key.json",
+ Body=json.dumps({"tags": [{"tagKeys": "tagKey", "tagValue": "tagValue"}]}),
+ )
+ lambda_handler(s3_event, None)
+
+ # ensure no executions started
+ executions = stepfunctions_mocked.list_executions(
+ stateMachineArn=environ.get("STATE_MACHINE_ARN"),
+ )
+ assert len(executions["executions"]) == 0
+
+
+@mock_sts
+def test_s3_event_handler_more_bad_tags(s3_event, sns_mocked, s3_mocked, stepfunctions_mocked):
+ s3_mocked.put_object(
+ Bucket="bucket-name",
+ Key="train/object-key.json",
+ Body=json.dumps({"tags": "bad data"}),
+ )
+ lambda_handler(s3_event, None)
+
+ # ensure no executions started
+ executions = stepfunctions_mocked.list_executions(
+ stateMachineArn=environ.get("STATE_MACHINE_ARN"),
+ )
+ assert len(executions["executions"]) == 0
diff --git a/source/tests/aws_lambda/sns_notification/test_sns_notification.py b/source/tests/aws_lambda/sns_notification/test_sns_notification.py
index 185707a..70b757c 100644
--- a/source/tests/aws_lambda/sns_notification/test_sns_notification.py
+++ b/source/tests/aws_lambda/sns_notification/test_sns_notification.py
@@ -16,9 +16,8 @@
import boto3
import pytest
-from moto import mock_sns, mock_sqs
-
from aws_lambda.sns_notification.handler import lambda_handler
+from moto import mock_sns, mock_sqs
TRACE_ID = "1-57f5498f-d91047849216d0f2ea3b6442"
@@ -30,7 +29,6 @@ def sqs_mock():
with mock_sqs():
with mock_sns():
-
cli = boto3.client("sns")
cli.create_topic(Name=topic_name)
@@ -77,7 +75,10 @@ def test_sns_notification(context, sqs_mock):
url = sqs_mock.get_queue_url(QueueName="TestQueue")["QueueUrl"]
msg = json.loads(
json.loads(
- sqs_mock.receive_message(QueueUrl=url, MaxNumberOfMessages=1,)["Messages"][
+ sqs_mock.receive_message(
+ QueueUrl=url,
+ MaxNumberOfMessages=1,
+ )["Messages"][
0
]["Body"]
)["Message"]
@@ -111,7 +112,10 @@ def test_sns_notification_trace(sqs_mock, trace_enabled, context):
url = sqs_mock.get_queue_url(QueueName="TestQueue")["QueueUrl"]
msg = json.loads(
json.loads(
- sqs_mock.receive_message(QueueUrl=url, MaxNumberOfMessages=1,)["Messages"][
+ sqs_mock.receive_message(
+ QueueUrl=url,
+ MaxNumberOfMessages=1,
+ )["Messages"][
0
]["Body"]
)["Message"]
diff --git a/source/tests/aws_lambda/test_personalize_service.py b/source/tests/aws_lambda/test_personalize_service.py
index c0c09ac..becdd1c 100644
--- a/source/tests/aws_lambda/test_personalize_service.py
+++ b/source/tests/aws_lambda/test_personalize_service.py
@@ -17,17 +17,17 @@
import boto3
import pytest
-from dateutil import tz
-from dateutil.tz import tzlocal
-from moto import mock_s3, mock_sts
-
from aws_lambda.shared.personalize_service import (
S3,
- Personalize,
Configuration,
+ Personalize,
get_duplicates,
)
-from shared.exceptions import ResourceNeedsUpdate, ResourceFailed
+from dateutil import tz
+from dateutil.tz import tzlocal
+from moto import mock_s3, mock_sts
+from moto.core import ACCOUNT_ID
+from shared.exceptions import ResourceFailed, ResourceNeedsUpdate
from shared.personalize.service_model import ServiceModel
from shared.resource import Campaign
@@ -58,7 +58,6 @@ def describe_solution_version_response():
"solutionVersionArn": f'arn:aws:personalize:us-east-1:{"1" * 12}:solution/personalize-integration-test-ranking/dfcd6f6e',
"solutionArn": f'arn:aws:personalize:us-east-1:{"1" * 12}:solution/personalize-integration-test-ranking',
"performHPO": False,
- "performAutoML": False,
"recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization",
"datasetGroupArn": f'arn:aws:personalize:us-east-1:{"1" * 12}:dataset-group/personalize-integration-test',
"solutionConfig": {},
@@ -239,11 +238,11 @@ def test_service_model(personalize_stubber):
)
sm = ServiceModel(cli)
-
assert sm.owned_by(filter_arn_1, dataset_group_arn_1)
assert sm.owned_by(campaign_arn_1, dataset_group_name_1)
assert sm.owned_by(filter_arn_2, dataset_group_arn_2)
assert sm.owned_by(campaign_arn_2, dataset_group_name_2)
+
for arn in [
dataset_group_arn_1,
dataset_group_arn_2,
@@ -265,6 +264,14 @@ def test_configuration_valid(configuration_path):
assert validates
+@mock_sts
+def test_tags_configuration_valid(tags_configuration_path):
+ cfg = Configuration()
+ cfg.load(tags_configuration_path)
+ validates = cfg.validate()
+ assert validates
+
+
@mock_sts
def test_configuration_empty(config_empty):
cfg = Configuration()
@@ -492,6 +499,9 @@ def test_solution_version_update_validation():
"serviceConfig": {
"name": "valid",
"recipeArn": "arn:aws:personalize:::recipe/aws-sims",
+ "solutionVersion": {
+ "tags": [{"tagKey": "solv-2", "tagValue": "solv-key-2"}],
+ },
},
"workflowConfig": {
"schedules": {
@@ -503,6 +513,7 @@ def test_solution_version_update_validation():
"serviceConfig": {
"name": "valid",
"recipeArn": "arn:aws:personalize:::recipe/aws-hrnn-coldstart",
+ "tags": [{"tagKey": "sol-3", "tagValue": "sol-key-3"}],
},
"workflowConfig": {
"schedules": {
@@ -528,3 +539,566 @@ def test_solution_version_update_validation():
cfg._validate_solution_update()
assert len(cfg._configuration_errors) == 1
assert cfg._configuration_errors[0].startswith("solution invalid does not support")
+
+
+@mock_sts
+def test_dataset_defaults(configuration_path):
+ """
+    Ensures that defaults are filled in for the fields the step functions expect to be present.
+ """
+ cfg = Configuration()
+ cfg.load(configuration_path)
+
+ validates = cfg.validate()
+ assert validates
+ assert len(cfg._configuration_errors) == 0
+
+ # datasetGroup defaults
+ assert cfg.config_dict["datasetGroup"]["serviceConfig"]["tags"] == []
+
+ # dataset-import defaults
+ assert cfg.config_dict["datasets"]["serviceConfig"]["importMode"] == "FULL"
+ assert cfg.config_dict["datasets"]["serviceConfig"]["tags"] == []
+
+ assert cfg.config_dict["datasets"]["serviceConfig"]["publishAttributionMetricsToS3"] == False
+
+ # dataset defaults
+ assert cfg.config_dict["datasets"]["users"]["dataset"]["serviceConfig"]["tags"] == []
+ assert cfg.config_dict["datasets"]["interactions"]["dataset"]["serviceConfig"]["tags"] == []
+ assert cfg.config_dict["datasets"]["items"]["dataset"]["serviceConfig"]["tags"] == []
+
+ # solutions default
+ assert cfg.config_dict["solutions"][0]["serviceConfig"]["tags"] == []
+ assert cfg.config_dict["solutions"][0]["serviceConfig"]["solutionVersion"]["tags"] == []
+ assert cfg.config_dict["solutions"][0]["serviceConfig"]["solutionVersion"]["trainingMode"] == "FULL"
+
+ assert cfg.config_dict["solutions"][1]["serviceConfig"]["tags"] == []
+ assert cfg.config_dict["solutions"][1]["serviceConfig"]["solutionVersion"]["tags"] == []
+ assert cfg.config_dict["solutions"][1]["serviceConfig"]["solutionVersion"]["trainingMode"] == "FULL"
+
+ # batchSegment defaults
+ assert cfg.config_dict["solutions"][0]["batchSegmentJobs"][0]["serviceConfig"]["tags"] == []
+
+ # campaign defaults
+ assert cfg.config_dict["solutions"][5]["campaigns"][0]["serviceConfig"]["tags"] == []
+
+ # batchInference defaults
+ assert cfg.config_dict["solutions"][5]["batchInferenceJobs"][0]["serviceConfig"]["tags"] == []
+
+ # eventTracker defaults
+ assert cfg.config_dict["eventTracker"]["serviceConfig"]["tags"] == []
+
+ # filter defaults
+ assert cfg.config_dict["filters"][0]["serviceConfig"]["tags"] == []
+
+
+@mock_sts
+def test_dataset_root_tags(root_tags_configuration_path):
+ """
+ Ensures that the root tags are set across all components.
+ """
+ cfg = Configuration()
+ cfg.load(root_tags_configuration_path)
+
+ validates = cfg.validate()
+ assert validates
+ assert len(cfg._configuration_errors) == 0
+
+ # datasetGroup defaults
+ assert cfg.config_dict["datasetGroup"]["serviceConfig"]["tags"] == [{"tagKey": "hello", "tagValue": "world"}]
+
+ # dataset-import defaults
+ assert cfg.config_dict["datasets"]["serviceConfig"]["importMode"] == "FULL"
+ assert cfg.config_dict["datasets"]["serviceConfig"]["tags"] == [{"tagKey": "hello", "tagValue": "world"}]
+
+ assert cfg.config_dict["datasets"]["serviceConfig"]["publishAttributionMetricsToS3"] == False
+
+ # dataset defaults
+ assert cfg.config_dict["datasets"]["users"]["dataset"]["serviceConfig"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+ assert cfg.config_dict["datasets"]["interactions"]["dataset"]["serviceConfig"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+ assert cfg.config_dict["datasets"]["items"]["dataset"]["serviceConfig"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+
+ # solutions default
+ assert cfg.config_dict["solutions"][0]["serviceConfig"]["tags"] == [{"tagKey": "hello", "tagValue": "world"}]
+ assert cfg.config_dict["solutions"][0]["serviceConfig"]["solutionVersion"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+ assert cfg.config_dict["solutions"][0]["serviceConfig"]["solutionVersion"]["trainingMode"] == "FULL"
+
+ assert cfg.config_dict["solutions"][1]["serviceConfig"]["tags"] == [{"tagKey": "hello", "tagValue": "world"}]
+ assert cfg.config_dict["solutions"][1]["serviceConfig"]["solutionVersion"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+ assert cfg.config_dict["solutions"][1]["serviceConfig"]["solutionVersion"]["trainingMode"] == "FULL"
+
+ # batchSegment defaults
+ assert cfg.config_dict["solutions"][0]["batchSegmentJobs"][0]["serviceConfig"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+
+ # campaign defaults
+ assert cfg.config_dict["solutions"][1]["campaigns"][0]["serviceConfig"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+
+ # batchInference defaults
+ assert cfg.config_dict["solutions"][1]["batchInferenceJobs"][0]["serviceConfig"]["tags"] == [
+ {"tagKey": "hello", "tagValue": "world"}
+ ]
+
+ # eventTracker defaults
+ assert cfg.config_dict["eventTracker"]["serviceConfig"]["tags"] == [{"tagKey": "hello", "tagValue": "world"}]
+
+ # filter defaults
+ assert cfg.config_dict["filters"][0]["serviceConfig"]["tags"] == [{"tagKey": "hello", "tagValue": "world"}]
+
+
+@mock_sts
+def test_bad_root_tag_keys():
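+    # Root-level tags use the same tagKey/tagValue shape as component tags;
+    # a misspelled key is reported as a configuration error.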
+ cfg = Configuration()
+ config = """
+ {
+ "tags": [{"tagKeys": "tagKey", "tagValue": "tagValue"}],
+ "datasetGroup": {"serviceConfig": {"name": "testing-tags"}}
+ }
+ """
+ cfg.load(str(config))
+
+ validates = cfg.validate()
+ assert cfg._configuration_errors == ["Parameter validation failed: Tag keys must be one of: 'tagKey', 'tagValue'"]
+ assert validates == False
+
+
+@mock_sts
+def test_bad_tag_keys():
+ cfg = Configuration()
+ config = """{
+ "datasetGroup": {
+ "serviceConfig": {"name": "testing-tags", "tags": [{"tagKeys": "tagKey", "tagValue": "tagValue"}]}
+ }
+ }
+ """
+
+ cfg.load(str(config))
+ validates = cfg.validate()
+
+ assert cfg._configuration_errors == [
+ 'Parameter validation failed: Missing required parameter in tags[0]: "tagKey" Unknown parameter in tags[0]: "tagKeys", must be one of: tagKey, tagValue'
+ ]
+ assert validates == False
+
+
+@mock_sts
+def test_more_bad_root_tag_keys():
+ cfg = Configuration()
+ config = """
+ {
+ "tags": {},
+ "datasetGroup": {"serviceConfig": {"name": "testing-tags"}}
+ }
+ """
+ cfg.load(str(config))
+ validates = cfg.validate()
+
+ assert cfg._configuration_errors == ["Invalid type at path root for tags, expected list[dict]."]
+    assert validates is False
+
+
+@mock_sts
+def test_more_bad_tag_keys():
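+    """Resource-level tags must be a list; a bare dict should fail botocore parameter validation."""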
+ cfg = Configuration()
+ config = """
+    {
+        "datasetGroup": {"serviceConfig": {"name": "testing-tags", "tags": {}}}
+ }
+ """
+    cfg.load(config)
+
+ validates = cfg.validate()
+
+    assert cfg._configuration_errors == [
+        "Parameter validation failed: Invalid type for parameter tags, value: {}, type: <class 'dict'>, valid types: <class 'list'>, <class 'tuple'>"
+    ]
+    assert validates is False
+
+
+@mock_sts
+def test_root_tag_keys():
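+    """Well-formed root-level tags should validate without errors."""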
+ cfg = Configuration()
+ config = """
+ {
+ "tags": [{"tagKey": "tagKey", "tagValue": "tagValue"}],
+ "datasetGroup": {"serviceConfig": {"name": "testing-tags"}}
+ }
+ """
+    cfg.load(config)
+
+ validates = cfg.validate()
+
+ assert cfg._configuration_errors == []
+ assert validates
+
+
+@mock_sts
+def test_tag_keys():
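+    """Well-formed resource-level tags should validate without errors."""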
+ cfg = Configuration()
+ config = """{
+ "datasetGroup": {
+ "serviceConfig": {"name": "testing-tags", "tags": [{"tagKey": "tagKey", "tagValue": "tagValue"}]}
+ }
+ }
+ """
+    cfg.load(config)
+
+ validates = cfg.validate()
+
+ assert cfg._configuration_errors == []
+ assert validates
+
+
+@mock_sts
+def test_dataset_group_args(tags_configuration_path, monkeypatch, argtest):
+ """
+    Ensure the arguments passed to the validation calls match the supplied config.
+ """
+ cfg = Configuration()
+ cfg.load(tags_configuration_path)
+
+ # returns arguments passed to mocked calls
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+
+ validates = cfg._validate_dataset_group()
+ assert validates is None
+ assert len(cfg._configuration_errors) == 0
+ assert argtest.args[1] == {"name": "unit_test_new_datasetgroup", "tags": [{"tagKey": "tag0", "tagValue": "key0"}]}
+
+
+@mock_sts
+def test_dataset_args(tags_configuration_path, monkeypatch, argtest):
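+    """Dataset validation should forward the dataset name, tags, derived ARNs, and dataset type to _fill_default_vals."""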
+ cfg = Configuration()
+ cfg.load(tags_configuration_path)
+
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+
+ cfg._validate_datasets()
+ assert len(cfg._configuration_errors) == 0
+ assert argtest.args[1] == {
+ "name": "unit_test_only_interactions",
+ "tags": [{"tagKey": "tag3", "tagValue": "key3"}],
+ "datasetGroupArn": f"arn:aws:personalize:us-east-1:{ACCOUNT_ID}:dataset-group/validation",
+ "schemaArn": f"arn:aws:personalize:us-east-1:{ACCOUNT_ID}:schema/validation",
+ "datasetType": "interactions",
+ }
+
+
+@mock_sts
+def test_dataset_import_args(monkeypatch, argtest):
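+    """Dataset import job validation should forward the job name, importMode, and tags to _fill_default_vals."""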
+ cfg = Configuration()
+ cfg.load(
+ """
+ {
+ "datasetGroup": {"serviceConfig": {"name": "unit_test_new_datasetgroup"}},
+ "datasets": {
+ "serviceConfig": {
+ "name": "dataset_import_config",
+ "importMode": "FULL",
+ "tags": [{"tagKey": "1", "tagValue": "1"}]
+ }
+ }
+ }
+ """
+ )
+
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+
+ cfg._validate_dataset_import_job()
+ assert len(cfg._configuration_errors) == 0
+ assert argtest.args[1] == {
+ "name": "dataset_import_config",
+ "importMode": "FULL",
+ "tags": [{"tagKey": "1", "tagValue": "1"}],
+ }
+
+
+@mock_sts
+def test_solution_version_args(monkeypatch, argtest):
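+    """Solution version validation should forward trainingMode and tags to _fill_default_vals."""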
+ cfg = Configuration()
+ cfg.load(
+ """
+ {
+ "datasetGroup": {"serviceConfig": {"name": "unit_test_new_datasetgroup"}},
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "unit_test_new_solution",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity",
+ "solutionVersion": {
+ "trainingMode": "FULL",
+ "tags": [{"tagKey": "1", "tagValue": "2"}]
+ }
+ }
+ }
+ ]
+ }
+ """
+ )
+
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+ cfg._validate_solution_version(cfg.config_dict["solutions"][0]["serviceConfig"])
+ assert len(cfg._configuration_errors) == 0
+ assert argtest.args[1] == {"trainingMode": "FULL", "tags": [{"tagKey": "1", "tagValue": "2"}]}
+
+
+@mock_sts
+def test_solution_version_unsupported_args(monkeypatch, argtest):
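+    """A 'name' key under solutionVersion is unsupported and should be reported as a configuration error."""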
+ cfg = Configuration()
+ cfg.load(
+ """
+ {
+ "datasetGroup": {"serviceConfig": {"name": "unit_test_new_datasetgroup"}},
+ "solutions": [
+ {
+ "serviceConfig": {
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity",
+ "solutionVersion": {
+ "name": "SolutionV1",
+ "tags": [{"tagKey": "1", "tagValue": "2"}]
+ }
+ }
+ }
+ ]
+ }
+ """
+ )
+
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+ cfg._validate_solution_version(cfg.config_dict["solutions"][0]["serviceConfig"])
+ assert argtest.args[1] == {"name": "SolutionV1", "tags": [{"tagKey": "1", "tagValue": "2"}]}
+ assert cfg._configuration_errors == [
+ "Allowed keys for solutionVersion are: ['trainingMode', 'tags']. Unsupported key(s): ['name']"
+ ]
+
+
+@mock_sts
+def test_batch_inference_args(monkeypatch, argtest):
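+    """Batch inference job args should include the solution version ARN, generated job name, role, I/O paths, and tags."""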
+ cfg = Configuration()
+ cfg.load(
+ """
+ {
+ "datasetGroup": {"serviceConfig": {"name": "unit_test_new_datasetgroup"}},
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "unit_test_new_solution",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity"
+ },
+ "batchInferenceJobs": [{"serviceConfig": {
+ "tags": [{"tagKey": "tag1", "tagValue": "key1"}]
+ }}]
+ }
+ ]
+ }
+ """
+ )
+
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+ solution = cfg.config_dict["solutions"][0]
+
+ cfg._validate_batch_inference_jobs(
+ "solutions[0].batchInferenceJobs",
+ solution["serviceConfig"]["name"],
+ solution["batchInferenceJobs"],
+ )
+ assert cfg._configuration_errors == []
+
+ args = argtest.args[1]
+ assert args["solutionVersionArn"] == f"arn:aws:personalize:us-east-1:{ACCOUNT_ID}:solution/validation/unknown"
+ assert args["jobName"].startswith("batch_" + solution["serviceConfig"]["name"])
+ assert args["roleArn"] == "roleArn"
+ assert args["jobInput"] == {"s3DataSource": {"path": "s3://data-source"}}
+ assert args["jobOutput"] == {"s3DataDestination": {"path": "s3://data-destination"}}
+ assert args["tags"] == [{"tagKey": "tag1", "tagValue": "key1"}]
+
+
+@mock_sts
+def test_campaign_args(monkeypatch, argtest):
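+    """Campaign validation should forward the campaign name, tags, and resolved solution version ARN."""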
+ cfg = Configuration()
+ cfg.load(
+ """
+ {
+ "datasetGroup": {"serviceConfig": {"name": "unit_test_new_datasetgroup"}},
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "unit_test_new_solution",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity"
+ },
+ "campaigns": [{"serviceConfig": {"name": "campaign1", "tags": [{"tagKey": "tag1", "tagValue": "key1"}]}}]
+ }
+ ]
+ }
+ """
+ )
+
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+ solution = cfg.config_dict["solutions"][0]
+
+    cfg._validate_campaigns("solutions[0].campaigns", solution["campaigns"])
+ assert cfg._configuration_errors == []
+ assert argtest.args[1] == {
+ "name": "campaign1",
+ "tags": [{"tagKey": "tag1", "tagValue": "key1"}],
+ "solutionVersionArn": f"arn:aws:personalize:us-east-1:{ACCOUNT_ID}:solution/validation/unknown",
+ }
+
+
+@mock_sts
+def test_batch_segment_args(monkeypatch, argtest):
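+    """Batch segment jobs are validated with the batch inference validator and should carry the same job defaults plus tags."""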
+ cfg = Configuration()
+ cfg.load(
+ """
+ {
+ "datasetGroup": {"serviceConfig": {"name": "unit_test_new_datasetgroup"}},
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "unit_test_new_solution",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity"
+ },
+ "batchSegmentJobs": [{"serviceConfig": {
+ "tags": [{"tagKey": "tag1", "tagValue": "key1"}]
+ }}]
+ }
+ ]
+ }
+ """
+ )
+
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+ solution = cfg.config_dict["solutions"][0]
+
+ cfg._validate_batch_inference_jobs(
+ "solutions[0].batchSegmentJobs",
+ solution["serviceConfig"]["name"],
+ solution["batchSegmentJobs"],
+ )
+ assert cfg._configuration_errors == []
+
+ args = argtest.args[1]
+ assert args["solutionVersionArn"] == f"arn:aws:personalize:us-east-1:{ACCOUNT_ID}:solution/validation/unknown"
+ assert args["jobName"].startswith("batch_" + solution["serviceConfig"]["name"])
+ assert args["roleArn"] == "roleArn"
+ assert args["jobInput"] == {"s3DataSource": {"path": "s3://data-source"}}
+ assert args["jobOutput"] == {"s3DataDestination": {"path": "s3://data-destination"}}
+ assert args["tags"] == [{"tagKey": "tag1", "tagValue": "key1"}]
+
+
+def test_recommender_args(tags_configuration_path, monkeypatch, argtest):
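+    """Recommender validation should forward the recommender name, recipe ARN, and tags."""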
+ cfg = Configuration()
+ cfg.load(tags_configuration_path)
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+
+ cfg._validate_recommender()
+ assert len(cfg._configuration_errors) == 0
+
+ assert argtest.args[1] == {
+ "name": "ddsg-most-viewed",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-ecomm-popular-items-by-views",
+ "tags": [{"tagKey": "hello13", "tagValue": "world13"}],
+ }
+
+
+@mock_sts
+def test_filter_args(tags_configuration_path, monkeypatch, argtest):
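+    """Filter validation should forward the filter name, expression, tags, and dataset group ARN."""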
+ cfg = Configuration()
+ cfg.load(tags_configuration_path)
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+
+ cfg._validate_filters()
+ assert len(cfg._configuration_errors) == 0
+
+ assert argtest.args[1] == {
+ "name": "clicked-or-streamed-2",
+ "filterExpression": 'INCLUDE ItemID WHERE Interactions.EVENT_TYPE in ("click", "stream")',
+ "tags": [{"tagKey": "tag11", "tagValue": "key11"}],
+ "datasetGroupArn": f"arn:aws:personalize:us-east-1:{ACCOUNT_ID}:dataset-group/validation",
+ }
+
+
+@mock_sts
+def test_event_tracker_args(tags_configuration_path, monkeypatch, argtest):
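+    """Event tracker validation should forward the tracker name, tags, and dataset group ARN."""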
+ cfg = Configuration()
+ cfg.load(tags_configuration_path)
+ monkeypatch.setattr("aws_lambda.shared.personalize_service.Configuration._fill_default_vals", argtest)
+
+ cfg._validate_event_tracker()
+ assert len(cfg._configuration_errors) == 0
+
+ assert argtest.args[1] == {
+ "name": "unit_test_new_event_tracker",
+ "tags": [{"tagKey": "tag10", "tagValue": "key10"}],
+ "datasetGroupArn": f"arn:aws:personalize:us-east-1:{ACCOUNT_ID}:dataset-group/validation",
+ }
diff --git a/source/tests/aws_lambda/test_sfn_middleware.py b/source/tests/aws_lambda/test_sfn_middleware.py
index 51c8419..e474c3a 100644
--- a/source/tests/aws_lambda/test_sfn_middleware.py
+++ b/source/tests/aws_lambda/test_sfn_middleware.py
@@ -15,22 +15,21 @@
from decimal import Decimal
import pytest
-from moto import mock_sts
-
from aws_lambda.shared.sfn_middleware import (
- PersonalizeResource,
- STATUS_IN_PROGRESS,
STATUS_FAILED,
- ResourcePending,
+ STATUS_IN_PROGRESS,
+ Parameter,
+ PersonalizeResource,
ResourceFailed,
ResourceInvalid,
+ ResourcePending,
json_handler,
- set_defaults,
- set_bucket,
parse_datetime,
- Parameter,
+ set_bucket,
+ set_defaults,
set_workflow_config,
)
+from moto import mock_sts
from shared.resource import DatasetGroup
@@ -70,7 +69,7 @@ def test_personalize_resource_decorator(personalize_resource, personalize_stubbe
"""
The typical workflow is to describe, then create, then raise ResourcePending
"""
- dsg_name = "dsgName"
+ dsg_name = "mockDatasetGroup"
personalize_stubber.add_client_error("describe_dataset_group", "ResourceNotFoundException")
personalize_stubber.add_response(
"create_dataset_group",
@@ -204,6 +203,7 @@ def test_parameter_resolution(key, source, path, format_as, default, result):
def test_set_workflow_config():
result = set_workflow_config(
{
+ "tags": [{"tagKey": "tag1", "tagValue": "key1"}],
"datasetGroup": {
"serviceConfig": {"datasetGroup": "should-not-change"},
"workflowConfig": {"maxAge": "one day"},
@@ -212,6 +212,7 @@ def test_set_workflow_config():
"serviceConfig": {},
},
"datasets": {
+ "serviceConfig": {},
"users": {
"dataset": {"serviceConfig": {}},
"schema": {"serviceConfig": {}},
@@ -228,7 +229,14 @@ def test_set_workflow_config():
"filters": [{"serviceConfig": {}}],
"solutions": [
{
- "serviceConfig": {"datasetGroup": "should-not-change"},
+ "serviceConfig": {
+ "datasetGroup": "should-not-change",
+ "tags": [{"tagKey": "mockSolution", "tagValue": "solutionKey"}],
+ "solutionVersion": {
+ "name": "mockSolutionVersion",
+ "tags": [{"tagKey": "mockSolutionVersion", "tagValue": "solutionVersionKey"}],
+ },
+ },
"campaigns": [
{
"serviceConfig": {},
@@ -257,6 +265,13 @@ def test_set_workflow_config():
# keys under serviceConfig should not change
assert result.get("datasetGroup").get("serviceConfig").get("datasetGroup") == "should-not-change"
assert result.get("solutions")[0].get("serviceConfig").get("datasetGroup") == "should-not-change"
+ assert result.get("solutions")[0].get("serviceConfig").get("tags") == [
+ {"tagKey": "mockSolution", "tagValue": "solutionKey"}
+ ]
+ assert result.get("solutions")[0].get("serviceConfig").get("solutionVersion").get("tags") == [
+ {"tagKey": "mockSolutionVersion", "tagValue": "solutionVersionKey"}
+ ]
# overrides to the default must remain unchanged
assert result.get("solutions")[0]["campaigns"][0]["workflowConfig"]["maxAge"] == "should-not-change"
diff --git a/source/tests/cdk_solution_helper/aws_lambda/python/fixtures/pyproject.toml b/source/tests/cdk_solution_helper/aws_lambda/python/fixtures/pyproject.toml
index 96b05d7..2c1c716 100644
--- a/source/tests/cdk_solution_helper/aws_lambda/python/fixtures/pyproject.toml
+++ b/source/tests/cdk_solution_helper/aws_lambda/python/fixtures/pyproject.toml
@@ -5,7 +5,7 @@ description = ""
authors = ["AWS Solutions Builders"]
[tool.poetry.dependencies]
-python = "^3.7"
+python = "^3.9"
minimal = {path = "package"}
[tool.poetry.dev-dependencies]
diff --git a/source/tests/cdk_solution_helper/aws_lambda/python/test_function.py b/source/tests/cdk_solution_helper/aws_lambda/python/test_function.py
index 88c8f94..098a86d 100644
--- a/source/tests/cdk_solution_helper/aws_lambda/python/test_function.py
+++ b/source/tests/cdk_solution_helper/aws_lambda/python/test_function.py
@@ -77,7 +77,7 @@ def test_function_has_default_role(function_synth):
func = function_stack["Resources"]["TestFunction"]
assert func["Type"] == "AWS::Lambda::Function"
assert func["Properties"]["Handler"] == PYTHON_FUNCTION_NAME.split(".")[0] + "." + PYTHON_FUNCTION_HANDLER_NAME
- assert func["Properties"]["Runtime"] == "python3.7"
+ assert func["Properties"]["Runtime"] == "python3.9"
role = function_stack["Resources"][func["Properties"]["Role"]["Fn::GetAtt"][0]]
assert role["Type"] == "AWS::IAM::Role"
diff --git a/source/tests/conftest.py b/source/tests/conftest.py
index 07c344d..f6d4cde 100644
--- a/source/tests/conftest.py
+++ b/source/tests/conftest.py
@@ -10,9 +10,9 @@
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for #
# the specific language governing permissions and limitations under the License. #
# ######################################################################################################################
+import json
import os
import sys
-import json
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Dict, Optional
@@ -21,18 +21,17 @@
import jsii
import pytest
from aws_cdk.aws_lambda import (
- FunctionProps,
Code,
- Runtime,
Function,
- LayerVersionProps,
+ FunctionProps,
LayerVersion,
+ LayerVersionProps,
+ Runtime,
)
+from aws_solutions.core import get_service_client
from botocore.stub import Stubber
from constructs import Construct
-from aws_solutions.core import get_service_client
-
shared_path = str(Path(__file__).parent.parent / "aws_lambda")
if shared_path not in sys.path:
sys.path.insert(0, shared_path)
@@ -99,7 +98,7 @@ def mock_lambda_init(
props = FunctionProps(
code=Code.from_inline("return"),
handler=handler,
- runtime=Runtime.PYTHON_3_7,
+ runtime=Runtime.PYTHON_3_9,
**kwargs,
)
jsii.create(Function, self, [scope, id, props])
@@ -110,7 +109,7 @@ def mock_layer_init(self, scope: Construct, id: str, *, code: Code, **kwargs) ->
# override the runtime list for now, as well, to match above
with TemporaryDirectory() as tmpdirname:
kwargs["code"] = Code.from_asset(path=tmpdirname)
- kwargs["compatible_runtimes"] = [Runtime.PYTHON_3_7]
+ kwargs["compatible_runtimes"] = [Runtime.PYTHON_3_9]
props = LayerVersionProps(**kwargs)
jsii.create(LayerVersion, self, [scope, id, props])
@@ -131,6 +130,25 @@ def configuration_path():
return Path(__file__).parent / "fixtures" / "config" / "sample_config.json"
+@pytest.fixture
+def tags_configuration_path():
+ return Path(__file__).parent / "fixtures" / "config" / "sample_config_wtags.json"
+
+
+@pytest.fixture
+def root_tags_configuration_path():
+ return Path(__file__).parent / "fixtures" / "config" / "sample_config_root_tags.json"
+
+
+@pytest.fixture
+def argtest():
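+    """Spy fixture: returns a callable that records the positional arguments it was last called with."""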
+ class TestArgs(object):
+ def __call__(self, *args):
+ self.args = list(args)
+
+ return TestArgs()
+
+
class NotifierStub(Notifier):
def __init__(self):
self.creation_notifications = []
@@ -197,9 +215,12 @@ def _validate_handler_config(resource: str, config: Dict, status: Optional[str]
shape = resource[0].upper() + resource[1:]
request_shape = cli.meta.service_model.shape_for(f"Create{shape}Request")
- del request_shape.members["tags"]
- if "importMode" in request_shape.members:
- del request_shape.members["importMode"]
+
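+    # performAutoML is deprecated by the service and no longer sent, so drop it from the request shape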
+ if "performAutoML" in request_shape.members:
+ del request_shape.members["performAutoML"]
+
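+    # solutionVersion configs only allow trainingMode and tags, so drop name from the shape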
+ if shape == "SolutionVersion":
+ del request_shape.members["name"]
response_shape = cli.meta.service_model.shape_for(f"Describe{shape}Response")
diff --git a/source/tests/fixtures/config/sample_config.json b/source/tests/fixtures/config/sample_config.json
index 626f0ee..5b9e4fd 100644
--- a/source/tests/fixtures/config/sample_config.json
+++ b/source/tests/fixtures/config/sample_config.json
@@ -80,6 +80,35 @@
}
}
}
+ },
+ "items": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "items-dataset"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "items-schema",
+ "schema": {
+ "type": "record",
+ "name": "Items",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "GENRES",
+ "type": "string",
+ "categorical": true
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
}
},
"solutions": [
@@ -89,12 +118,12 @@
"recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity"
},
"batchSegmentJobs": [
- {
- "serviceConfig": {},
- "workflowConfig": {
- "schedule": "cron(0 3 * * ? *)"
- }
+ {
+ "serviceConfig": {},
+ "workflowConfig": {
+ "schedule": "cron(0 3 * * ? *)"
}
+ }
]
},
{
@@ -106,7 +135,7 @@
{
"serviceConfig": {},
"workflowConfig": {
- "schedule": "cron(0 3 * * ? *)"
+ "schedule": "cron(0 3 * * ? *)"
}
}
]
diff --git a/source/tests/fixtures/config/sample_config_root_tags.json b/source/tests/fixtures/config/sample_config_root_tags.json
new file mode 100644
index 0000000..f1df692
--- /dev/null
+++ b/source/tests/fixtures/config/sample_config_root_tags.json
@@ -0,0 +1,149 @@
+{
+ "tags": [
+ {
+ "tagKey": "hello",
+ "tagValue": "world"
+ }
+ ],
+ "datasetGroup": {
+ "serviceConfig": {
+ "name": "testing-tags"
+ }
+ },
+ "datasets": {
+ "interactions": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "interactions-dataset"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "interactions-schema",
+ "schema": {
+ "type": "record",
+ "name": "Interactions",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "EVENT_TYPE",
+ "type": "string"
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ },
+ "items": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "items-dataset"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "items-schema",
+ "schema": {
+ "type": "record",
+ "name": "Items",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "GENRES",
+ "type": "string",
+ "categorical": true
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ },
+ "users": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "users-dataset"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "users-schema",
+ "schema": {
+ "type": "record",
+ "name": "Users",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "GENDER",
+ "type": "string",
+ "categorical": true
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ }
+ },
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "affinity_item"
+ },
+ "batchSegmentJobs": [
+ {
+ "serviceConfig": {}
+ }
+ ]
+ },
+ {
+ "serviceConfig": {
+ "name": "unit_test_personalized_ranking_new_2",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization"
+ },
+ "campaigns": [
+ {
+ "serviceConfig": {
+ "name": "personalized_ranking_campaign",
+ "minProvisionedTPS": 1
+ }
+ }
+ ],
+ "batchInferenceJobs": [
+ {
+ "serviceConfig": {}
+ }
+ ]
+ }
+ ],
+ "eventTracker": {
+ "serviceConfig": {
+ "name": "unit_test_new_event_tracker"
+ }
+ },
+ "filters": [
+ {
+ "serviceConfig": {
+ "name": "clicked-or-streamed-2",
+ "filterExpression": "INCLUDE ItemID WHERE Interactions.EVENT_TYPE in ('click', 'stream')"
+ }
+ }
+ ]
+}
\ No newline at end of file
diff --git a/source/tests/fixtures/config/sample_config_wtags.json b/source/tests/fixtures/config/sample_config_wtags.json
new file mode 100644
index 0000000..c33d9a5
--- /dev/null
+++ b/source/tests/fixtures/config/sample_config_wtags.json
@@ -0,0 +1,324 @@
+{
+ "datasetGroup": {
+ "serviceConfig": {
+ "name": "unit_test_new_datasetgroup",
+ "tags": [
+ {
+ "tagKey": "tag0",
+ "tagValue": "key0"
+ }
+ ]
+ },
+ "workflowConfig": {
+ "schedules": {
+ "import": "cron(0 */6 * * ? *)"
+ }
+ }
+ },
+ "datasets": {
+ "serviceConfig": {
+ "importMode": "FULL",
+ "tags": [
+ {
+ "tagKey": "tag1",
+ "tagValue": "key1"
+ }
+ ]
+ },
+ "users": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "unit_test_only_users",
+ "tags": [
+ {
+ "tagKey": "tag2",
+ "tagValue": "key2"
+ }
+ ]
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "unit_test_only_users_schema",
+ "schema": {
+ "type": "record",
+ "name": "users",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "AGE",
+ "type": "int"
+ },
+ {
+ "name": "GENDER",
+ "type": "string",
+ "categorical": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "items": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "items-dataset"
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "items-schema",
+ "schema": {
+ "type": "record",
+ "name": "Items",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "GENRES",
+ "type": "string",
+ "categorical": true
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ }
+ },
+ "interactions": {
+ "dataset": {
+ "serviceConfig": {
+ "name": "unit_test_only_interactions",
+ "tags": [
+ {
+ "tagKey": "tag3",
+ "tagValue": "key3"
+ }
+ ]
+ }
+ },
+ "schema": {
+ "serviceConfig": {
+ "name": "unit_test_only_interactions_schema",
+ "schema": {
+ "type": "record",
+ "name": "interactions",
+ "namespace": "com.amazonaws.personalize.schema",
+ "fields": [
+ {
+ "name": "ITEM_ID",
+ "type": "string"
+ },
+ {
+ "name": "USER_ID",
+ "type": "string"
+ },
+ {
+ "name": "TIMESTAMP",
+ "type": "long"
+ },
+ {
+ "name": "EVENT_TYPE",
+ "type": "string"
+ },
+ {
+ "name": "EVENT_VALUE",
+ "type": "float"
+ }
+ ]
+ }
+ }
+ }
+ }
+ },
+ "solutions": [
+ {
+ "serviceConfig": {
+ "name": "affinity_item",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-affinity",
+ "tags": [
+ {
+ "tagKey": "tag4",
+ "tagValue": "key4"
+ }
+ ],
+ "solutionVersion": {
+ "tags": [
+ {
+ "tagKey": "tag5",
+ "tagValue": "key5"
+ }
+ ]
+ }
+ },
+ "batchSegmentJobs": [
+ {
+ "serviceConfig": {
+ "tags": [
+ {
+ "tagKey": "tag6",
+ "tagValue": "key6"
+ }
+ ]
+ },
+ "workflowConfig": {
+ "schedule": "cron(0 3 * * ? *)"
+ }
+ }
+ ]
+ },
+ {
+ "serviceConfig": {
+ "name": "affinity_item_attribute",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-item-attribute-affinity",
+ "tags": [
+ {
+ "tagKey": "tag7",
+ "tagValue": "key7"
+ }
+ ]
+ },
+ "batchSegmentJobs": [
+ {
+ "serviceConfig": {},
+ "workflowConfig": {
+ "schedule": "cron(0 3 * * ? *)"
+ }
+ }
+ ]
+ },
+ {
+ "serviceConfig": {
+ "name": "unit_test_sims_new",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-sims"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "full": "cron(0 0 ? * 1 *)"
+ }
+ }
+ },
+ {
+ "serviceConfig": {
+ "name": "unit_test_popularity_count_new",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-popularity-count"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "full": "cron(0 1 ? * 1 *)"
+ }
+ }
+ },
+ {
+ "serviceConfig": {
+ "name": "unit_test_personalized_ranking_new",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "full": "cron(0 2 ? * 1 *)"
+ }
+ },
+ "campaigns": [
+ {
+ "serviceConfig": {
+ "name": "unit_test_personalized_ranking_new_campaign",
+ "minProvisionedTPS": 1,
+ "tags": [
+ {
+ "tagKey": "tag8",
+ "tagValue": "key8"
+ }
+ ]
+ }
+ }
+ ]
+ },
+ {
+ "serviceConfig": {
+ "name": "unit_test_personalized_ranking_new_2",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization"
+ },
+ "workflowConfig": {
+ "schedules": {
+ "full": "cron(0 2 ? * 1 *)"
+ }
+ },
+ "campaigns": [
+ {
+ "serviceConfig": {
+ "name": "unit_test_personalized_ranking_2_campaign",
+ "minProvisionedTPS": 1
+ }
+ }
+ ],
+ "batchInferenceJobs": [
+ {
+ "serviceConfig": {
+ "tags": [
+ {
+ "tagKey": "tag9",
+ "tagValue": "key9"
+ }
+ ]
+ },
+ "workflowConfig": {
+ "schedule": "cron(0 3 * * ? *)"
+ }
+ }
+ ]
+ }
+ ],
+ "eventTracker": {
+ "serviceConfig": {
+ "name": "unit_test_new_event_tracker",
+ "tags": [
+ {
+ "tagKey": "tag10",
+ "tagValue": "key10"
+ }
+ ]
+ }
+ },
+ "filters": [
+ {
+ "serviceConfig": {
+ "name": "clicked-or-streamed-2",
+ "filterExpression": "INCLUDE ItemID WHERE Interactions.EVENT_TYPE in (\"click\", \"stream\")",
+ "tags": [
+ {
+ "tagKey": "tag11",
+ "tagValue": "key11"
+ }
+ ]
+ }
+ }
+ ],
+ "tags": [
+ {
+ "tagKey": "tag12",
+ "tagValue": "key12"
+ }
+ ],
+ "recommenders": [
+ {
+ "serviceConfig": {
+ "name": "ddsg-most-viewed",
+ "recipeArn": "arn:aws:personalize:::recipe/aws-ecomm-popular-items-by-views",
+ "tags": [
+ {
+ "tagKey": "hello13",
+ "tagValue": "world13"
+ }
+ ]
+ }
+ }
+ ]
+}
\ No newline at end of file