Using the JamfPackageCleaner in our DEV environment works fine; however, in our Pre-Prod environment the processor fails. With -vvvv logging enabled and looking at the Jamf Upload logs, the API responds with "504 gateway timeout" for the first three attempts, then on the 4th attempt with "The server has not found anything matching the request URI". This continues for the remaining attempts, each generating a response code 404.
The traceback references a few areas of the processor in particular:
JamfPackageCleaner.py", line 305, in main
self.delete_package(
JamfPackageCleaner.py", line 161, in delete_package
raise ProcessorError("ERROR: Package deletion failed "
Upon completion of the recipe run, the package does actually appear to have been deleted when checking in Jamf. The actual output from autopkgr displayed in the terminal is as follows:
JamfPackageCleaner: Deleting package...
JamfPackageCleaner: Package delete attempt 1
JamfPackageCleaner: UNKNOWN ERROR: Package '3259' deletion failed. Will try again.
If there was a way for the timeout to be managed better it would be helpful, as at the moment the run stops after the first package deletion attempt and doesn't move on to the other packages in scope (those not kept back by the versions_to_keep variable). As it stands, numerous runs would be required to clear them all, and scheduling something with launchd seems a lot of work if this could be resolved instead.
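For example, a retry loop along these lines is roughly what I mean. This is only a rough sketch with made-up names (delete_package here is not the processor's actual method), assuming the Classic API packages endpoint and token auth; the idea is to use a longer per-request timeout and to treat a 404 that follows an earlier 504 as confirmation that the delete already went through, then carry on with the remaining packages:

# Rough sketch only: hypothetical helper, not the processor's real code.
import time
import requests

def delete_package(jamf_url, token, pkg_id, max_attempts=5, timeout=120, wait=30):
    """Try to delete a package, treating a later 404 as 'already deleted'."""
    saw_gateway_timeout = False
    for attempt in range(1, max_attempts + 1):
        r = requests.delete(
            f"{jamf_url}/JSSResource/packages/id/{pkg_id}",
            headers={"Authorization": f"Bearer {token}"},
            timeout=timeout,  # generous read timeout so a slow delete can finish
        )
        if r.status_code in (200, 201):
            return True
        if r.status_code == 504:
            # The server is presumably still working on the delete; wait and retry.
            saw_gateway_timeout = True
            time.sleep(wait)
            continue
        if r.status_code == 404 and saw_gateway_timeout:
            # The earlier delete evidently succeeded; the object is simply gone now.
            return True
        time.sleep(wait)
    return False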
After monitoring the progress of the attempts while watching Jamf Pro, the package appears to be deleted after 3 or 4 attempts, but the autopkg run output still carries on until all 5 attempts have failed.
You mean you're using your own S3 bucket as a cloud distribution point rather than Jamf's? That's not something I've been able to test, but I do remember anecdotally that one other user puts "Sleep" processors in between other JamfUploader processors to allow the S3 bucket to catch up (when adding packages, not when deleting). Possibly JamfPackageCleaner needs longer with this form of DP than with others to get useful feedback, or possibly it won't work at all because the S3 bucket is disconnected from the pkg metadata.
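For reference, a minimal sketch of what such a Sleep processor could look like (illustrative only; the names here, including the sleep_time input variable, are made up, and the shared processor people actually use may differ):

# Illustrative sketch of a simple pause processor for AutoPkg recipes.
import time
from autopkglib import Processor

class Sleep(Processor):
    description = "Pauses the recipe run for a number of seconds."
    input_variables = {
        "sleep_time": {
            "required": False,
            "default": "30",
            "description": "Seconds to wait before the next processor runs.",
        }
    }
    output_variables = {}

    def main(self):
        seconds = int(self.env.get("sleep_time", "30"))
        self.output(f"Sleeping for {seconds} seconds")
        time.sleep(seconds)

if __name__ == "__main__":
    PROCESSOR = Sleep()
    PROCESSOR.execute_shell()

Placed between an upload step and the next processor, a pause like this gives the distribution point time to catch up before the next API call.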
This also reminds me that JamfPackageCleaner is going to need to be updated for JCDS 2.0.