@jarstelfox and I measured the exact timing of strapi deploy and service availability and found a surprise.
This is for a fresh deploy:
Strapi build starts: 12:34:56
Strapi build "completed": 12:36:08
Curl went from SSL failure to HTTP 502: 12:36:36
Curl went from HTTP 502 to HTTP 204 (Strapi is available): 12:38:05
It's not a huge amount of time, but it's surprising that there was ~2 minutes between the container starting and the service being reachable over HTTP.
This likely has something to do with Caddy and adding new routes. Perhaps Caddy sees the service as down right after the route is added, and it stays marked down for another couple of minutes (slow healing) even though the service is up and responding.
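If the slow-healing theory is right, the relevant knobs would be Caddy's passive and active health checking on the `reverse_proxy` directive: an upstream that fails right after the route is added stays marked down for `fail_duration` unless an active check recovers it sooner. A sketch of what that could look like (the site address, upstream, and health endpoint are assumptions, not our actual config):

```
strapi.example.com {
    reverse_proxy strapi:1337 {
        # Passive health: after max_fails failures, the upstream is held
        # down for fail_duration before being retried.
        fail_duration 10s
        max_fails 1

        # Active health checks can mark the upstream healthy again much
        # sooner than waiting out fail_duration.
        health_uri /_health
        health_interval 5s
        health_timeout 2s
    }
}
```

A long `fail_duration` (or no active checks at all) would produce exactly the symptom above: the container is up, but Caddy keeps answering 502 until its view of the upstream heals.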
Note: a re-deploy of the same branch looks much rosier (8s of downtime, totally fine):
Strapi build starts: 12:40:52
Strapi build "completed": 12:42:01
Curl went from HTTP 204 to HTTP 502: 12:42:00
Curl went from HTTP 502 to HTTP 204 (Strapi is available): 12:42:08
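The transitions above can be captured by polling curl once a second and logging every status-code change; curl reports `000` on SSL/connection failures, which is the "SSL failure" state in the fresh-deploy timeline. A minimal sketch (the URL is a placeholder, not our real endpoint):

```shell
# log_transitions: read one status code per line from stdin and print a
# timestamped line whenever the code changes.
log_transitions() {
  prev=""
  while read -r code; do
    if [ "$code" != "$prev" ]; then
      echo "$(date +%T) ${prev:-start} -> $code"
      prev="$code"
    fi
  done
}

# Poll loop (assumed URL); --max-time keeps hung connections from
# skewing the timestamps:
#   URL="https://strapi.example.com/healthcheck"
#   while true; do
#     curl -s -o /dev/null --max-time 2 -w '%{http_code}\n' "$URL"
#     sleep 1
#   done | log_transitions
```

Each printed line is one of the transition timestamps recorded above (e.g. `502 -> 204`).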
danielbeardsley changed the title from "Strapi deploy slowness: reduce" to "Strapi deploy slowness: Reduce 2 minute delay" on Feb 15, 2024.
Fixing this delay isn't critical. Now that CI waits for Strapi, this is more of a curiosity than a real problem. There are plenty of places where your efforts will have a bigger impact. I mostly wrote this issue so we'd record what we found.