chore: add upload existing cids on s3 script #8

Merged · 4 commits · Jan 30, 2024
3 changes: 2 additions & 1 deletion .gitignore
@@ -136,7 +136,8 @@ terraform/*.json
 *.out
 tfplan
 
-.env
+.env*
+!.env.example
 build
 .compose
 .turbo
31 changes: 31 additions & 0 deletions README.md
@@ -238,6 +238,37 @@ script execute:
pnpm upload-folder
```

### Uploading existing CIDs to S3

The script at `/scripts/upload-existing-cids-on-s3.ts` detects all the CIDs
used across the Carrot protocol and uploads them to the S3 limbo/persistent
storage. It handles underlying data encoded in both the raw JSON and MerkleDAG
PB `multicodec` formats (with the option of adding more supported codecs if
needed), and it works across chains. It requires a few environment variables in
order to run:

- `S3_BUCKET`: the name of the S3 bucket to which to upload the data.
- `S3_REGION`: the region in which the S3 bucket is available.
- `S3_ENDPOINT` (optional): the endpoint of the S3 bucket to which to upload
  the data.
- `S3_ACCESS_KEY_ID`: an AWS access key ID with permission policies attached
  that allow uploading data to the target bucket.
- `S3_SECRET_ACCESS_KEY`: the secret access key associated with the specified
  access key ID.

You need to put these env variables in a separate
`.env.upload-existing-cids-on-s3` file. You can also test the script against a
local S3 deployment, in the same way as with the development server: bootstrap
MinIO through the provided Docker Compose file and put the matching values in
the `.env.upload-existing-cids-on-s3` file.
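As a sketch, a `.env.upload-existing-cids-on-s3` file pointing at a local MinIO
instance could look like the following (the bucket name is a hypothetical
placeholder, and `minioadmin`/`minioadmin` are only MinIO's out-of-the-box
default credentials, not values to use in production):

```
S3_BUCKET=carrot-data
S3_REGION=us-east-1
S3_ENDPOINT=http://127.0.0.1:9000
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
```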

To launch the script, execute the following command in your terminal from the
root of the repo:

```
pnpm upload-existing-cids-on-s3
```
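The two supported formats mentioned above can be told apart from the CID
itself: every CID embeds a multicodec code identifying how the underlying data
is encoded (`0x55` for raw bytes, `0x70` for MerkleDAG PB). A minimal sketch of
that dispatch, independent of the actual script (the `handlerFor` function and
its return labels are hypothetical, not part of the codebase):

```typescript
// Multicodec codes for the two formats the script supports.
const RAW_CODEC = 0x55; // raw bytes (e.g. plain JSON payloads)
const DAG_PB_CODEC = 0x70; // MerkleDAG protobuf (e.g. UnixFS directories)

// Hypothetical dispatch: pick a decoding strategy based on a CID's codec code.
function handlerFor(codecCode: number): string {
    switch (codecCode) {
        case RAW_CODEC:
            return "json";
        case DAG_PB_CODEC:
            return "dag-pb";
        default:
            // Unknown codecs fail loudly so new ones get added explicitly.
            throw new Error(`unsupported codec 0x${codecCode.toString(16)}`);
    }
}
```

In practice the codec code is read off a parsed CID (for instance via the
`code` property of a `CID` from the `multiformats` package), so supporting a
new codec only means adding one more case to the dispatch.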

## OpenAPI

The OpenAPI specification is exposed under `/swagger.json`, while the Swagger UI
6 changes: 6 additions & 0 deletions package.json
@@ -13,22 +13,28 @@
"format": "eslint --fix --ext .js ./ && prettier --write './**/*.{json,md}'",
"generate-jwt": "tsx scripts/generate-jwt.ts",
"upload-folder": "tsx scripts/upload-folder.ts",
"upload-existing-cids-on-s3": "tsx scripts/upload-existing-cids-on-s3.ts",
"build": "esbuild ./src/index.ts --platform=node --format=esm --define:process.env.NODE_ENV=\\\"production\\\" --bundle --packages=external --minify --outdir=./out --out-extension:.js=.mjs",
"dev": "nodemon --exec tsx src/index.ts"
},
"devDependencies": {
"@carrot-kpi/sdk": "^1.49.0",
"@commitlint/cli": "^18.4.4",
"@commitlint/config-conventional": "^18.4.4",
"@smithy/types": "^2.9.1",
"@types/jsonwebtoken": "^9.0.5",
"@types/pg": "^8.10.9",
"blockstore-core": "^4.3.10",
"dotenv": "^16.3.1",
"esbuild": "^0.19.11",
"eslint": "^8.56.0",
"eslint-config-custom": "*",
"eslint-config-prettier": "^9.1.0",
"eslint-plugin-prettier": "^5.1.3",
"graphql-request": "^6.1.0",
"husky": "^8.0.3",
"ipfs-unixfs-exporter": "^13.4.0",
"mime": "^4.0.1",
"nodemon": "^3.0.3",
"pino": "^8.17.2",
"prettier": "^3.2.4",