Allow configuration of which upstream Registry to cache #9

Open
johnrengelman opened this issue Nov 13, 2013 · 3 comments

Comments

@johnrengelman

Currently, on a cache miss, npm_lazy attempts to fetch from registry.npmjs.org. It would be nice if we could configure the GET to retrieve from a different upstream URL instead of the main site.

Perhaps even provide a list of registries to attempt to retrieve from, with a sane default timeout on the request. So registry.npmjs.org would be attempted first, but if it doesn't respond fast enough, it could fall back to something like: http://npm.nodejs.org.au:5984/registry/_design/app/_rewrite
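
A minimal sketch of what such a configuration might look like; the option names below (remoteRegistries, httpTimeout) are illustrative, not npm_lazy's actual config keys:

```js
// config.js -- hypothetical npm_lazy configuration; the key names here
// (remoteRegistries, httpTimeout) are illustrative only.
module.exports = {
  // Registries to try in order on a cache miss.
  remoteRegistries: [
    'https://registry.npmjs.org/',
    'http://npm.nodejs.org.au:5984/registry/_design/app/_rewrite'
  ],
  // Give up on a request that takes longer than this many milliseconds
  // and move on to the next registry in the list.
  httpTimeout: 2000
};
```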

@mixu
Owner

mixu commented Nov 19, 2013

@johnrengelman thanks for suggesting this; I implemented the part where the registry URL is configurable in v1.0.x.

It would be interesting to also have a list of registries so that npm_lazy can fall back to a different registry backend. I'd need to figure out the exact logic for that, though: what counts as sufficient evidence to abandon one registry URL and try another, given that I don't want to do that lightly.

It might simply be that if a resource does not exist locally (or the local copy is too old, in the case of metadata) and the request fails the 5x retries, then we try the secondary registry URL.

The issue with that, though, is that I'd prefer to accumulate evidence against a registry so that all requests get switched automatically; if one package is down, then all the others probably are too.

Here are the types of evidence supported now:

  • timeouts (right now 2 seconds)
  • invalid responses (e.g. index fails to parse as JSON, shasum check fails for a tarfile)
  • retry limit exceeded (e.g. after invalid or timed-out responses)
  • HTTP errors (e.g. based on res.statusCode)

Need to think about this a bit more before doing the fallback thing.
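
As a purely illustrative sketch of the evidence-accumulation idea (none of these names or thresholds come from npm_lazy itself), the bookkeeping could look roughly like this:

```js
// Hypothetical per-registry failure tracking; none of this is npm_lazy code.
var FAILURE_THRESHOLD = 5;   // pieces of evidence before switching away
var failureCounts = {};      // registry URL -> consecutive failures

// Record one piece of evidence against a registry: a timeout, an invalid
// response (unparseable JSON, shasum mismatch), exhausted retries, or a
// bad status code.
function recordFailure(registryUrl) {
  failureCounts[registryUrl] = (failureCounts[registryUrl] || 0) + 1;
}

// A successful response clears the accumulated evidence.
function recordSuccess(registryUrl) {
  failureCounts[registryUrl] = 0;
}

// All requests consult the same counters, so once the threshold is hit
// every request switches to the next registry, not just the one that failed.
function pickRegistry(registryUrls) {
  for (var i = 0; i < registryUrls.length; i++) {
    if ((failureCounts[registryUrls[i]] || 0) < FAILURE_THRESHOLD) {
      return registryUrls[i];
    }
  }
  return registryUrls[0]; // everything looks down; fall back to the primary
}
```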

@leftieFriele
Contributor

I just wanted to comment on this issue as I had a similar request in mind at first, but have since changed my mind.
We have been using npm_lazy in front of the Kappa module, which has support for multiple fallback repositories. This allows us to have a local repo queried first and the public repos queried after that.
We think this works great as each module has one responsibility: npm_lazy for caching in case of downtime, and Kappa for providing the local repos.
My point is that I think a configuration option for a single registry is sufficient, and it helps keep npm_lazy simple.

@Burstaholic

It should be fairly simple to do basic fallback like Kappa does: in a private-registry scenario, it is entirely expected for the registry to respond with a 404, which doesn't mean it is down, just that the package is not a private one and needs to be fetched from the public registry.

It's much more useful for me to be able to transparently use a private registry than to have insurance against npmjs.org itself going down, which sounds quite a bit more complicated to implement.
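
A minimal sketch of that 404-based fallback, using Node's https module directly rather than npm_lazy's real fetch path, with a placeholder private registry URL:

```js
var https = require('https');

// Registries to try in order; the private registry URL is a placeholder.
var registries = [
  'https://npm.internal.example.com/',
  'https://registry.npmjs.org/'
];

// Fetch package metadata, treating a non-200 response from an earlier
// registry as "not hosted here" and falling through to the next one.
function fetchPackageMeta(name, cb) {
  function tryNext(i) {
    if (i >= registries.length) {
      return cb(new Error('package not found in any registry: ' + name));
    }
    https.get(registries[i] + name, function(res) {
      // A 404 from the private registry just means the package is public;
      // fall through instead of treating the registry as down.
      if (res.statusCode !== 200) {
        res.resume();
        return tryNext(i + 1);
      }
      var body = '';
      res.on('data', function(chunk) { body += chunk; });
      res.on('end', function() { cb(null, JSON.parse(body)); });
    }).on('error', function() {
      tryNext(i + 1); // connection errors also fall through
    });
  }
  tryNext(0);
}

// Usage: fetchPackageMeta('npm_lazy', function(err, meta) { /* ... */ });
```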
