
Add skip_resource_check option #191

Open · wants to merge 2 commits into master

Conversation


mhluska commented Dec 4, 2017

Issue #189

Allows skipping the internal HEAD request, which some pre-signed URLs (e.g. Amazon S3) reject.

This helps in environments like Heroku where downloading the file up front is not an option (large files inflate process memory, causing the dyno to be killed).


mhluska commented Dec 4, 2017

If you really need to preserve the file size data, we can do it by starting a GET request stream and terminating it early as soon as the Content-Length header is available.
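The early-termination idea above can be sketched as follows. This is not part of this library's API; the helper name and URL are illustrative. `Net::HTTP#request_get` yields the response once the headers are parsed but before the body is streamed, so returning from inside the block abandons the download:

```ruby
require 'net/http'
require 'uri'

# Hypothetical helper: recover the file size from a GET-only presigned URL
# by starting a GET and bailing out as soon as the headers arrive.
def remote_content_length(url)
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request_get(uri.request_uri) do |response|
      # Headers are available here before any of the body has been read;
      # returning closes the connection without downloading the payload.
      return Integer(response['Content-Length'])
    end
  end
end
```

Because the connection is closed from `Net::HTTP.start`'s ensure block, only a negligible amount of the body (at most one buffered chunk) is ever transferred.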

@yoelblum

I need to create a task that runs over thousands of Amazon S3 videos and updates their metadata; obviously, it's better not to download each video. Unfortunately, the HEAD request fails for me, so maybe this is a good direction 👍

@yoelblum

@mhluska are you sure the file size data is lost in a HEAD request? I'm actually not quite sure how ffmpeg gets the metadata from a remote file.


mhluska commented Apr 19, 2018

@yoelblum if I remember correctly, with this library you'll ultimately have to download the full file to do any processing on it. If all you need is the Content-Length header and modifying metadata in S3, you probably don't need this library. You can probably get by with the AWS Ruby SDK: https://github.com/aws/aws-sdk-ruby

@yoelblum

@mhluska It works fine without downloading the file. The metadata is returned correctly, including duration and file size. I have no idea how it works, but it works! The only issue I had was indeed the HEAD request for Amazon S3.


mhluska commented Apr 19, 2018

Huh, interesting. Well, glad you got it working.

@gonzaloaune

Having the exact same problem: if the URL is a presigned S3 one, the HEAD will fail because presigned URLs from S3 are only valid for the GET request. If you want one that supports HEAD, the URL signature will be different.
