
build/common: Add remote_fetch curl wrapper #167

Open
morucci wants to merge 1 commit into master

Conversation

@morucci (Contributor) commented Dec 15, 2014

This patch proposes remote_fetch in order to make curl/wget usage
consistent between calls. The main point is to use the curl retry
mechanism to bypass temporary remote failures.

This can be a starting point to make our build process resilient to upstream failures ...
Let us know your thoughts about that. We have started using it to build SF images.
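
A minimal sketch of what such a wrapper might look like (the function name comes from the PR title; the flags and structure below are assumptions, not the actual diff):

```bash
# Hypothetical sketch of a remote_fetch wrapper -- flags and structure
# are assumptions, not the contents of the actual patch.
remote_fetch() {
    local url=$1
    local output=$2
    # curl's --retry treats HTTP 5xx responses as transient errors, so
    # --retry 12 --retry-delay 10 gives the fixed "12 times, 10 seconds
    # apart" scheme discussed below; --fail makes the final attempt exit
    # non-zero instead of saving the error page.
    curl --fail --location --silent --show-error \
         --retry 12 --retry-delay 10 \
         --output "$output" "$url"
}
```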

@ErwanAliasr1 (Contributor)

On what calls do you aim to use it? All of them?

@morucci (Contributor, Author) commented Dec 15, 2014

Basically to replace each direct call to wget or curl (at least in *.install files); a before/after sketch follows below. This is to solve these issues:

  • Avoid build failures when an upstream endpoint returns e.g. a 5xx error, by using the retry feature of curl. This aims to stabilise image building.
  • Avoid inconsistency between component fetching methods (e.g. sometimes wget, sometimes curl, sometimes option X, sometimes option XY ...)

Have a look here http://softwarefactory.enovance.com/_r/%7C/c/428/
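
For instance, a hypothetical *.install snippet (URLs and file names are illustrative, not taken from the actual patch):

```bash
# Before: direct calls, each with its own tool and options.
wget -q http://example.com/image.qcow2 -O image.qcow2
curl -sL http://example.com/pkg.tar.gz -o pkg.tar.gz

# After: one consistent entry point with built-in retries.
remote_fetch http://example.com/image.qcow2 image.qcow2
remote_fetch http://example.com/pkg.tar.gz pkg.tar.gz
```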

@morucci (Contributor, Author) commented Dec 15, 2014

For -m, --max-time, I'm not sure about this one, as it seems to be the maximum amount of time allowed for the whole operation, so I won't be able to tell you what value to specify. A fetch can take long if the network is slow, but if the fetch is working then that is not really a problem. So for me the "-m" option does not solve the problem.
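
To illustrate the concern (URL hypothetical):

```bash
# --max-time caps the whole transfer: a slow but otherwise healthy
# download is aborted after 60 s even if it is still making progress.
curl --max-time 60 -o image.qcow2 http://example.com/image.qcow2
```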

For the --retry-max-time option, why not. If we pass something like 120 to this option, we won't keep retrying the fetch after 120 s have elapsed. This is the same as what I propose, except that with --retry-max-time we use the "exponential backoff algorithm" instead of a fixed 10 seconds, 12 times.
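
Side by side, the two schemes under discussion (values from this thread; the commands themselves are illustrative):

```bash
# Fixed scheme: up to 12 retries, always 10 s apart.
curl --retry 12 --retry-delay 10 -o out.file "$url"

# Exponential backoff: curl waits 1 s, 2 s, 4 s, ... between retries,
# and --retry-max-time 120 stops retrying once 120 s have elapsed.
curl --retry 12 --retry-max-time 120 -o out.file "$url"
```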

@ErwanAliasr1 (Contributor)

I like the idea, please make a PR with the full change.

@fredericlepied (Contributor)

@morucci still working on this?

@ErwanAliasr1 (Contributor)

@morucci hey, still interested in this PR?
