For s3 this means the buffer size used by default for uploading/downloading: minimum 2 MB, maximum 5 GB (https://clickhouse.com/docs/en/operations/system-tables/parts).
No, files inside a part are read with streaming (some buffers for file IO and the Linux page cache are also used) and compressed; different SDKs use different buffers, look at https://github.com/Altinity/clickhouse-backup#explain-config-parameters.
This is not precise, but you can't predict bytes_on_disk. We run the clickhouse-backup container on the same host as clickhouse-server, and the backup memory usage graph for our typical 500Gb database over the last 7 days (upload_concurrency = 4) shows that the estimate is a maximum which is never actually reached.
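If it helps, here is a rough sketch of that estimate in SQL. It is my own combination of the 2 MB / 5 GB bounds above with the max(bytes_on_disk)/4000 query from the other reply, not a formula taken from clickhouse-backup itself:

-- Hypothetical estimate: clamp the per-part buffer to the 2 MB .. 5 GB range
-- mentioned above, then multiply by upload_concurrency (4, as in the graph example).
SELECT
    formatReadableSize(least(5 * 1024 * 1024 * 1024,
                             greatest(2 * 1024 * 1024,
                                      max(bytes_on_disk) / 4000))) AS buffer_per_upload,
    formatReadableSize(4 * least(5 * 1024 * 1024 * 1024,
                                 greatest(2 * 1024 * 1024,
                                          max(bytes_on_disk) / 4000))) AS approx_upload_memory
FROM system.parts
WHERE database != 'system'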
CPU/IO usage: clickhouse-backup already runs with the lowest CPU nice and IO nice priority, see cpu_nice_priority and io_nice_priority in https://github.com/Altinity/clickhouse-backup/?tab=readme-ov-file#explain-config-parameters. Memory usage can be controlled with the upload_concurrency / download_concurrency settings in the general config section, plus the concurrency settings specific to some remote storage types.
Yes, the maximum size of a data part is related; look at

SELECT max(bytes_on_disk)/4000 FROM system.parts WHERE database!='system'

If you have a lot of data you can still use a "small" container for backup, but you will need to tune the settings.
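And a small variation of that query (again just a sketch, not something from the clickhouse-backup docs) that shows which tables hold the largest active parts, so you can see what actually drives the estimate when sizing a "small" backup container:

-- Per-table maximum active part size and the corresponding /4000 buffer estimate.
SELECT
    database,
    table,
    formatReadableSize(max(bytes_on_disk)) AS max_part_size,
    formatReadableSize(max(bytes_on_disk) / 4000) AS approx_buffer
FROM system.parts
WHERE database != 'system' AND active
GROUP BY database, table
ORDER BY max(bytes_on_disk) DESC
LIMIT 10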