[PRO] Using S3 for storing and replaying traffic

Leonid Bugaev edited this page Apr 29, 2017 · 11 revisions

By default GoReplay allows you to store recorded requests in files.

With high traffic volumes, file storage becomes a bottleneck, and it makes sense to upload recordings to cloud storage instead.

While uploading data to the cloud is quite "easy", replaying that data with GoReplay is non-trivial.

GoReplay PRO adds support for replaying data directly from Amazon S3 storage, as well as uploading recorded data to S3.

For reading from S3, use --input-file s3://<bucket>/<path>, for example: gor --input-file s3://logs/2016-05-* --output-http http://example.com.

For writing to S3, use --output-file s3://<bucket>/<path>, for example: gor --input-raw :80 --output-file s3://logs/%Y-%m-%d.gz.

Both input and output behave exactly as they do with ordinary files, including support for file patterns and automatic chunk creation.
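The %Y-%m-%d part of the output path is a strftime-style date pattern, so each chunk is named after the time it was written. One way to preview the object names such a pattern produces is to expand it with date, which accepts the same %-codes (plain shell, gor not required; the logs bucket is a hypothetical example):

```shell
# Preview the S3 object name an --output-file date pattern would produce.
# date expands the same %-codes gor uses for chunk names.
pattern="s3://logs/%Y-%m-%d.gz"   # hypothetical bucket "logs"
date -u +"$pattern"               # e.g. s3://logs/2017-04-29.gz
```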

GoReplay takes AWS credentials from standard environment variables:

  • AWS_ACCESS_KEY_ID – AWS access key.
  • AWS_SECRET_ACCESS_KEY – AWS secret key. Access and secret key variables override credentials stored in credential and config files.
  • AWS_DEFAULT_REGION – AWS region. This variable overrides the default region of the in-use profile if set.
  • AWS_ENDPOINT_URL – Custom S3 endpoint, used when accessing S3 through a proxy, on AWS GovCloud, or with S3-compatible storage such as Minio.
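Putting the variables together, a session against an S3-compatible server might look like the following sketch. The credentials, endpoint, and bucket name here are placeholder values for a local Minio instance, not defaults:

```shell
# Placeholder credentials and endpoint for a local Minio server;
# replace with your own values.
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin"
export AWS_DEFAULT_REGION="us-east-1"
export AWS_ENDPOINT_URL="http://127.0.0.1:9000"

# With the environment set, gor reads and writes s3:// paths as usual:
#   gor --input-raw :80 --output-file s3://logs/%Y-%m-%d.gz
#   gor --input-file "s3://logs/2016-05-*" --output-http http://example.com
```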