
Local env does not result in S3 images being uploaded #97

Closed
justenh opened this issue Apr 14, 2021 · 11 comments


justenh commented Apr 14, 2021

Hi,

I'm having trouble with our storages config.

I've seen in the FAQs that Imager will always store local copies of the files, but I'm not seeing our files ever make their way to S3. I can confirm that our optimized files (using the TinyPNG API) are being stored locally, but they are not being uploaded to S3.

Also, I can confirm that the S3 credentials are accurate, because they are tied to a volume that successfully uploads to the S3 bucket.

Additionally, I don't see anything in the Debug Bar when I search the logs for "Imager" or "Error".

Our config file looks like this... (Note: we upgraded from Imager 2 to Imager X Pro.)

config/imager.php

<?php

return [

    'imagerUrl' => 'http://xxxx.cloudfront.net/transforms',

    'optimizers' => ['tinypng'],

    'storages' => ['aws'],

    'optimizerConfig' => [
        'tinypng' => [
            'extensions' => ['png','jpg'],
            'apiKey' => 'xxxx',
        ],
    ],

    'storageConfig' => [
        'aws' => [
            'accessKey' => 'xxxx',
            'secretAccessKey' => 'xxxx',
            'region' => 'us-east-2',
            'bucket' => 'xxxx',
            'folder' => '/transforms',
            'requestHeaders' => [],
            'storageType' => 'standard',
            'cloudfrontInvalidateEnabled' => false,
            'cloudfrontDistributionId' => '',
        ],
    ],
];

and our package versions are below.

composer.json 

 "require": {
    "craftcms/cms": "3.3.20.1",
    "vlucas/phpdotenv": "^2.4.0",
    "craftcms/redactor": "2.4.0",
    "verbb/expanded-singles": "^1.0",
    "sebastianlenz/linkfield": "1.0.23",
    "verbb/field-manager": "^2.0",
    "verbb/super-table": "^2.0",
    "craftcms/element-api": "^2.5",
    "luwes/craft3-codemirror": "^1.0",
    "barrelstrength/sprout-fields": "^3.2",
    "nystudio107/craft-seomatic": "3.3.23",
    "nystudio107/craft-retour": "3.1.52",
    "craftcms/feed-me": "^4.1",
    "verbb/knock-knock": "^1.1",
    "presseddigital/colorit": "1.0.9.3",
    "craftcms/aws-s3": "1.2.11",
    "spacecatninja/imager-x": "v3.4.0"
  },

Thank you for any help you can provide


aelvan commented Apr 18, 2021

Hi,

...but I am not seeing our files ever make their way to S3

Have you confirmed this by checking the contents of your bucket in the AWS S3 console? I see that you haven't configured cloudfrontInvalidateEnabled and cloudfrontDistributionId, which means CloudFront won't be purged automatically; so if you only check the file that's returned to the browser, you could be looking at a cached version.
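If a stale CloudFront cache is what's masking the uploads, invalidation can be switched on in the storage config. This is a minimal sketch based only on the settings already shown in the config above; the distribution ID value is a placeholder:

```php
<?php
// config/imager.php (fragment) — hypothetical sketch, not verified against this setup
return [
    'storageConfig' => [
        'aws' => [
            // ...credentials, region, bucket, and folder as shown earlier...

            // Purge CloudFront automatically when a transform is (re)created:
            'cloudfrontInvalidateEnabled' => true,
            'cloudfrontDistributionId'    => 'xxxx', // from the CloudFront console
        ],
    ],
];
```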

Also, do any transformed files make their way to S3, or only the optimized ones?

Are you 100% sure that us-east-2 is the correct region? If I recall correctly, this isn't something you configure for the Craft volume; it's inferred automatically there. And there's often some confusion around this, since the bucket's region isn't shown in the console, and the region in the console URL isn't necessarily the same as the bucket's. Check out this comment and the following couple of comments, which resolved an identical error report.
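One way to double-check the bucket's actual region is to query it directly with the AWS SDK for PHP (which is already a dependency of craftcms/aws-s3). This is a hypothetical standalone diagnostic sketch, not part of Imager's own API; the key, secret, and bucket values are the same placeholders as in the config above:

```php
<?php
// Standalone diagnostic: ask S3 which region the bucket actually lives in.
require __DIR__ . '/vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1', // GetBucketLocation can be called from any region
    'credentials' => [
        'key'    => 'xxxx', // placeholder
        'secret' => 'xxxx', // placeholder
    ],
]);

$result = $client->getBucketLocation(['Bucket' => 'xxxx']);
// For a US East (Ohio) bucket this should print "us-east-2".
echo $result['LocationConstraint'] . PHP_EOL;
```

Note that this requires valid credentials with at least s3:GetBucketLocation permission on the bucket.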


justenh commented Apr 18, 2021

Hi @aelvan

Have you confirmed this by checking the contents of your bucket in the AWS S3 console?

Pulling up the bucket in S3 console and navigating to the "transforms" folder shows "You don't have any objects in this folder."

Also, doesn't any transformed files make its way to S3, or only the optimized ones?

At the moment we're only utilizing optimized images (no transforms), but no files are being uploaded to S3.

Are you 100% sure that us-east-2 is the correct region?

Yep! Had to triple check after stumbling across those posts earlier! Our region is US East (Ohio) us-east-2 in S3 console.

Also, I can confirm that uploading via the volume is working perfectly and that the images are being optimized and stored locally within a folder called imager (maybe that indicates an issue?)

Really appreciate your help and time on this. Imager has been a great tool for us.


aelvan commented Apr 19, 2021

Have you checked that the transforms don't end up in some other location in the bucket? Like in /transformsmypath/to/images or something. Shouldn't be the case, but... bugs.

What local environment are you on?

And this is happening for newly created transforms, i.e. they don't already exist in the imagerSystemPath (defaults to @webroot/imager)? If they already exist there, Imager will assume they have already been uploaded to S3 and won't try to re-upload. That may be the case if you previously had the transformed images locally but then decided to move them to S3.
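For reference, the local cache location is controlled by the imagerSystemPath setting. A sketch of overriding it in config/imager.php, using only the documented default path mentioned above:

```php
<?php
// config/imager.php (fragment) — sketch; the value shown is just the default
return [
    // Imager's local transform cache. If a transform already exists here,
    // Imager assumes it has also been uploaded to external storage and skips
    // the upload, so clear this folder when testing uploads from scratch.
    'imagerSystemPath' => '@webroot/imager',
];
```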

Did you previously have S3 working in Imager 2.0 before upgrading?


justenh commented Apr 20, 2021

Have you checked that the transforms doesn't end up in some other location on the bucket?

I checked the bucket and can confirm no images get uploaded to my configured path or any other path within the bucket.

What local environment are you on?

I'm running the project using Laravel Valet. All other aspects of the project seem to be working well (DB, PHP, etc.), including the optimization via TinyPNG. Let me know if there are more specifics I can provide here.

And this is happening for newly created transforms, ie they don't already exist in the imagerSystemPath

That's correct. I've been running `rm -rf web/imager` before each go, and the imager folder keeps coming back, but no files get pushed to S3.

Did you previously have S3 working in Imager 2.0 before upgrading?

We didn't! In fact, I was convinced that's why S3 wasn't working, so we upgraded to see if it was something else.

I know this is probably a weird thing to debug, but thank you very much for your help here! Let me know if there's any other info I can provide, or ways in which I can test things out. Thanks again!


aelvan commented Apr 22, 2021

Hmm, if it didn't work for Imager 2.0 either, it sounds like it's something with the credentials. Have you tried, just for kicks, to create a new set of credentials and a new bucket, and test with that? It's weird that you don't get any errors though.

Are you running TinyPNG at runtime or as a queue job? If as a queue job, have you checked the queue.log files for errors? It shouldn't really matter, because the file is uploaded both before and after optimizing, but... Have you tried disabling TinyPNG completely?
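To rule the optimizer step out entirely, it can be disabled in config/imager.php. A sketch based on the config posted earlier in the thread:

```php
<?php
// config/imager.php (fragment) — temporary debugging state, sketch only
return [
    // Disable all post-optimization so only the plain transform + S3
    // upload path runs:
    'optimizers' => [],

    // Storage stays enabled so the uploads can still be observed:
    'storages' => ['aws'],
];
```

If files then start appearing in the bucket, the problem is in the optimizer step rather than the storage step.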

Have you upgraded to the Pro edition? External storages are a Pro feature, but you should get a warning about that in the debug toolbar too.


aelvan commented Apr 28, 2021

@justenh Did you figure this out?


justenh commented Apr 30, 2021

@aelvan Not yet. Sorry! I had to switch to another project this week, but plan to give this another go next week. Thanks for checking in!

@davidhellmann

@aelvan I also have a question related to this. Is it not possible to keep the images and transforms only on AWS, and not locally as well? Otherwise I need twice the storage space (AWS and @webroot/assets/transforms).


aelvan commented Apr 14, 2022

@davidhellmann No, the local files act as a cache and need to be on the local file system. If Imager had to check S3 to see whether a transform had been created or not, the latency would kill performance.

I've been contemplating adding a separate caching layer, as outlined here, both to deal with the inconvenience of disk usage and with the fact that in serverless setups there might not be a local disk available. Since making that comment, I've decided that 4.0 will mostly be a pure Craft 4.0 conversion, to minimize the effort/risk of upgrading, but it's still on the list for future releases.

@davidhellmann

Hm, OK. We're trying to switch a project from ImageOptimize to Imager X because IO adds a lot of database overhead, especially in a multisite environment.
But OK, I'll have to check how our complete setup is configured to see whether it's possible to store the files in the local folder.

Thanks!


aelvan commented Apr 14, 2022

Yeah, I guess you have to choose between DB overhead and disk overhead then. ;)
