We currently need a common failover mechanism, as well as a common default behavior for managing the bootstrap list.
A few issues could be solved by this:
1. In order to enforce the locality of our components (i.e., S3 connecting to the BucketD instance running on the same host), we are currently forced to set only one item in the bootstrap list of the associated component's client library, because the list is shuffled and does not track which component is closest.
2. When installing locally, the Federation config templates set the bucketclient's bootstrap list to the host itself by default, without specifying any port (a feature specific to local installs). This means that either configuring Metadata not to use the default port, or killing the one Metadata instance that owns the default port, renders the whole Metadata cluster useless, since every S3 keeps trying to connect to port 9000. Configuring the full bootstrap list with ports would help solve this by giving access to secondary servers.
@scality/team-ironman-core Discussion of this issue is really needed/welcome!
> Putting s3 and bucketd on different hosts may result in increased latency.
First, the point is to have a behavior that, by default, privileges the instance on the same host (in the best case, nothing changes).
Second, sure, going from S3 D to BucketD A only to then reach Repd C adds one more useless hop, so this might not be the most interesting part of this proposal.
Now, I am not aware of any official decision about forcing everything-on-localhost versus do-whatever-you-want in terms of installation flexibility. This piece of code would enable either of those two decisions, without judgment on which is right.
> Putting s3 and sproxyd on different hosts may result in increased network bandwidth usage.
Well, I do not know if there is much difference between having a local sproxyd that sends to a remote RING, versus a remote sproxyd that sends to the RING on its own host. I feel there should not be much difference in terms of performance, but you can prove me wrong.
Also, I'm really scared about having to manage the sproxyd configuration from federation. I do not think we should go that way.
> The failover/retry needs to be pretty reliable.
Sure, I agree. And that is exactly why it should be written once and properly tested, rather than written multiple times and badly tested. (And I know that won't convince you.)
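To make "written once" concrete, a shared failover helper could be as small as this sketch: try each target in order and move to the next on failure. The name and signature are illustrative, not an existing library API:

```javascript
// Try `request(target)` against each target in order; on failure,
// fall through to the next one. Throws the last error only if every
// target has failed.
async function withFailover(targets, request) {
    let lastErr;
    for (const target of targets) {
        try {
            return await request(target);
        } catch (err) {
            lastErr = err; // remember the failure, try the next target
        }
    }
    throw lastErr;
}
```

A helper like this, shared across the client libraries, is the single place where retry policy would get tested, instead of each component reimplementing it.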
> Load balancing will come into the picture.
That's the part I have no answer for.
But you know... this issue has been up for a while, including for discussion, and no one discussed it.
We need to have a real decision on the deployment design and flexibility. I'll be summoning @GiorgioRegni and @vrancurel for this :)
The only potentially hard-to-discuss point is the sproxyd deployment (right version) and its configuration management. Some work is currently ongoing on our side to try and improve this.
Load balancing is the most important worry, and it should factor into the decision related to point 1.