-
I was implementing this library in our project, which uses S3 for everything file-related. When I got to the point of having the file uploaded to the server and then uploading it from the server to S3, I started thinking about what an efficient way to do that would be, and how to replace my references to the local file with references to the S3 file. I thought I would investigate how complex it would be to modify this library to operate on the chunks from S3. It appears that, to upload large files (>10 GB) to S3, we have to stream them similarly to how this library does streaming. If you have the time and it's not a bother, would you take some time to explain to me why? I'm just trying to figure out for myself whether the tradeoff might be worth it, to perhaps make a fork and make the modification. Thank you for your consideration, and may God bless your day!
-
Hi @ChaDonSom, well, the problem is that it would not be effective to do this if you do not need scaling... But I would probably use the official way to upload a file to S3 in chunks, something like this: https://aws.amazon.com/blogs/compute/uploading-large-objects-to-amazon-s3-using-multipart-upload-and-transfer-acceleration/

All the chunks would be uploaded to S3. Then, to merge them, we would have to download all the files and merge them (this would still require storage on the disk). But it could be interesting to add support for this in a next version of the package (I think the package needs some refactoring to be more stable and up to date with the latest trends). I think I will probably need the package again this year, so I could invest the time and finally give it a nice update :)
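To make the multipart flow from that AWS post concrete, here is a minimal sketch using Python and boto3 (the bucket, key, and file names are hypothetical placeholders, and this shows the raw S3 API calls rather than anything this package itself provides):

```python
# Minimal sketch of an S3 multipart upload, assuming boto3 is installed
# and AWS credentials are configured. Bucket, key, and file names are
# hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "uploads/large-file.bin"
part_size = 10 * 1024 * 1024  # each part except the last must be >= 5 MiB

# 1. Start the multipart upload and remember its id.
upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = upload["UploadId"]

parts = []
with open("large-file.bin", "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        # 2. Upload each chunk as a numbered part of the same upload.
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

# 3. Tell S3 to assemble the parts into one object on its side.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```

Worth noting with this route: `complete_multipart_upload` assembles the parts on the S3 side, so the download-and-merge step above only applies when chunks end up stored as separate S3 objects rather than as parts of a single multipart upload.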