Six bucket upload #79
base: dev
Conversation
The magic for this function would be: let's say I pass this vector into the function:

```r
c(
  "/Users/sean/science/microscope/img1.png",
  "/Users/sean/science/microscope/img2.png",
  "/Users/sean/science/experiment1/"
)
```

and let's say `experiment1/` contains a `README.md` plus `data/` and `code/` subdirectories (the structure spelled out in my follow-up comment below). Then the following "keys" (to use AWS parlance) would be created in the bucket: `img1.png`, `img2.png`, and the `experiment1/` files under an `experiment1/` prefix; see the full listing in that follow-up comment.
It would be good to be able to change the level too, so you didn't always have to use the top level, but I think you've built that functionality already here. If this does what I've described, let me know and I will take it for a spin.
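To make the dir part concrete, here's an untested sketch of the expansion step I have in mind (`expand_inputs` is a made-up name and `fs` is used only for illustration, so this isn't a proposal for the package's API): plain files pass through as-is, and a dir gets walked for the files inside it.

```r
library(fs)

# made-up helper: files pass through unchanged, directories are
# expanded (recursively) to the files they contain
expand_inputs <- function(paths) {
  unlist(lapply(paths, function(p) {
    if (is_dir(p)) dir_ls(p, type = "file", recurse = TRUE) else p
  }), use.names = FALSE)
}

# expand_inputs(c(
#   "/Users/sean/science/microscope/img1.png",
#   "/Users/sean/science/microscope/img2.png",
#   "/Users/sean/science/experiment1/"
# ))
```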
See above.
That's not the current behavior right now with respect to dirs. Right now all the files in the dir get put into the root bucket path. I'll work on that and ping you when it's done.
@sckott do you understand what I'm saying here?
By "level" I assume you mean how far down you walk through the local directory? If that's what you're referring to, probably the thing people are most familiar with is just recursive or not, yeah? That is, either just the top level (which in this example would be just the README.md file), or recursive, which would be the example you gave, where you go down through dirs until there are no more dirs to go into.
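In code terms, that's roughly the difference between these two calls (using `fs::dir_ls` purely to illustrate, with the paths from the example above):

```r
library(fs)

# top level only: in this example just the README.md file
dir_ls("/Users/sean/science/experiment1/", type = "file", recurse = FALSE)

# recursive: README.md plus everything under data/ and code/
dir_ls("/Users/sean/science/experiment1/", type = "file", recurse = TRUE)
```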
Nah, lemme clarify. I'm thinking of functionality like:

```r
six_bucket_upload(
  c(
    "/Users/sean/science/microscope/img1.png",
    "/Users/sean/science/microscope/img2.png",
    "/Users/sean/science/experiment1/"
  ),
  destination_path = "nih_grant/aim1" # probably a bad choice for an arg name
)
```

which would create these keys:

```
/nih_grant/aim1/img1.png
/nih_grant/aim1/img2.png
/nih_grant/aim1/experiment1/README.md
/nih_grant/aim1/experiment1/data/results.csv
/nih_grant/aim1/experiment1/code/analysis.py
```

But if you don't specify `destination_path`, it just defaults to the top level of the bucket.
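In other words, `destination_path` just gets prefixed onto the keys that would otherwise land at the top level, roughly like this (`fs::path` is shown only to illustrate the joining, not a claim about how the package does it):

```r
library(fs)

# keys as they'd look with no destination_path (top level of the bucket)
keys <- c(
  "img1.png",
  "img2.png",
  "experiment1/README.md",
  "experiment1/data/results.csv",
  "experiment1/code/analysis.py"
)

path("nih_grant/aim1", keys)
#> nih_grant/aim1/img1.png  nih_grant/aim1/img2.png  ...
```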
Okay, so I think what I'm hearing with respect to dirs is:

Let's assume for simplicity that the arg name is
Sounds great! Thank you. And yes, I agree.
```r
if (length(remote_parts) > 1) {
  # remote_parts presumably holds the components of the remote spec
  # (e.g. c("bucket", "somedir")); drop the bucket name and join the
  # rest to form the key prefix
  key_prefix <- path_join(remote_parts[-1])
  cli_info("using key prefix {.strong {key_prefix}}")
  path$key <- path(key_prefix, path$key)
}
```
You need an `@importFrom fs path` wherever you think is appropriate.
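That would be a roxygen tag roughly like this, placed wherever makes sense; the function stub and its arguments below are placeholders, only the `@importFrom` line is the point:

```r
# e.g. in R/six_bucket_upload.R, in the roxygen block above the function
#' @importFrom fs path
six_bucket_upload <- function(path, remote, ...) {
  # body elided
}
```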
Looks and works great except for the one comment above.
fixes #67
@seankross please review, thanks.
Note that the `six_file_upload` function mentioned in the related issue is already in the package on the main branch.

I'm still not sure what the best behavior should be with respect to directories of files locally. Should a dir of files be uploaded into a dir of the same name, or just exploded into the root of the bucket? When I used the `s3fs` function for uploading to a bucket, it exploded the files in a local folder into the AWS bucket at the root rather than into a folder in the bucket. Perhaps only if the user specifies a path in the `remote` arg in addition to the bucket name, e.g., `bucketasdfasdfasdf/somedir`, then `somedir` would be the dir name. BUT that `somedir` could also just be a file name, so this seems precarious.

We could just wrap the `s3fs` function, but it's been a while since we were focusing on this package; I know I tried to move away from `s3fs` functions, but I'm not remembering why now.
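For the dir question, the two candidate behaviors differ only in what the keys are relative to. A small illustration (pure `fs` path arithmetic, no actual upload; the paths are just the ones from the discussion above):

```r
library(fs)

dir   <- "/Users/sean/science/experiment1"
files <- c(
  "/Users/sean/science/experiment1/README.md",
  "/Users/sean/science/experiment1/data/results.csv"
)

# "exploded" into the bucket root (the s3fs behavior I saw):
path_rel(files, start = dir)
#> README.md  data/results.csv

# uploaded into a dir of the same name in the bucket:
path(path_file(dir), path_rel(files, start = dir))
#> experiment1/README.md  experiment1/data/results.csv
```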