Disruptor does not affect traffic sent via kubectl port-forward
#214
Comments
I think this should be done as a temporary measure, since the solution to the problem may take time to implement and test properly, so that users facing the same issue do not waste time debugging it.
I've not checked if this will actually solve the issue, but assuming it will, I don't see why this should be a bad UX, as the IP can be detected by the agent and used in the command.
I probably should have elaborated further. I see two main issues with this approach:

The first one is that if we add a way to change the target upstream (e.g. …), …

The second (and probably more important) problem is that we cannot disrupt the same address we use as upstream. So if we use the pod IP as upstream, we will not be able to disrupt regular traffic, just localhost (port-forward) traffic. Having to specify this on the test does not feel like good UX to me, especially because it may fail if the deployment mode is changed later.
Thinking more about this, it might be possible to enable this without UX compromises by complicating the setup:

First, we add a new local-only IP to the pod, e.g. …

Then, we modify the …

The effect of this would be that all traffic directed towards the upstream application will be redirected to the proxy, except the traffic from the proxy itself (as per …).
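For illustration, the setup described above could look roughly like this. This is only a sketch of the idea: the dedicated address (169.254.1.1), the application port (80) and the proxy port (8080) are made-up placeholders, not values from the disruptor:

```sh
# Add a dedicated local-only address for the proxy to bind to:
ip addr add 169.254.1.1/32 dev lo

# Redirect traffic aimed at the application (port 80) to the proxy (port 8080),
# excluding traffic whose source is the proxy's own address, so the proxy can
# reach the upstream without being redirected back into itself:
iptables -t nat -A OUTPUT -p tcp --dport 80 ! -s 169.254.1.1 -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 80 ! -s 169.254.1.1 -j REDIRECT --to-ports 8080
```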
I think that if we document the issue for pods exposed using port forward and we define a simple option that addresses it (similar to the idea of the …), …

Also, I think we don't need to support disrupting traffic from the port forwarding and the regular traffic at the same time. This seems like a reasonable limitation to me.
Do we need to create a new IP address? If the proxy always binds to the pod IP and the iptables rules exclude this address as a source IP, I think the result would be the same. The traffic coming to the Pod through the …
It sounds like this should be the case, but using an IP that already has a different purpose and is perceived to be external for this seems... strange. It might be confusing even for us why we decided to do this. As an extra bonus of using a dedicated IP, we reduce the chances of collision with applications.
Which interface will have this IP address?
My idea was to add it to …
The current configuration for the disruptor will cause it to not disrupt traffic sent to the pods by means of `kubectl port-forward`, no matter whether the target of the `port-forward` is a pod or a service.

The reason for this is that the disruption is limited to traffic flowing through the `eth0` interface, as restricted by the `iptables` command here:

`xk6-disruptor/pkg/iptables/iptables.go`, line 13 in f405082

and defaulted here:

`xk6-disruptor/cmd/agent/commands/http.go`, line 75 in c9403f9
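For context, the general shape of such an interface-restricted redirect rule is sketched below. This is illustrative only, not the exact line from `iptables.go`; the application port (80) and proxy port (8080) are placeholders:

```sh
# Only packets arriving on eth0 are matched, so traffic that enters the pod
# through the lo interface is never redirected to the disruptor proxy:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
```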
This is because when using `kubectl port-forward`, traffic is forwarded through the `lo` interface. This can be checked by running `kubectl port-forward` and then `tcpdump` on the pod, observing that traffic only shows up when capturing on `lo` (`-i lo`):
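A minimal version of this check, assuming a placeholder pod name and an application listening on port 80:

```sh
# In one terminal, forward a local port to the pod:
kubectl port-forward pod/my-app 3333:80

# Inside the pod, capturing on eth0 shows no forwarded traffic...
tcpdump -i eth0 tcp port 80

# ...while capturing on lo shows the forwarded requests:
tcpdump -i lo tcp port 80
```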
The solution for this is non-trivial.
### ❌ Workaround attempt with `iface: "lo"`

A tempting way to work around this would be to specify `iface: "lo"` in the disruptor configuration. This, unfortunately, will not work, as the disruptor proxy itself uses `localhost` (thus `lo`) as upstream:

`xk6-disruptor/cmd/agent/commands/http.go`, line 21 in c9403f9

If attempted, the generated `REJECT` rule:

`xk6-disruptor/pkg/iptables/iptables.go`, line 15 in c9403f9

shuts down this otherwise infinite connection loop:
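Roughly, the loop and the rule that breaks it look like this. Again a sketch with placeholder ports, not the exact rules generated by the agent:

```sh
# With iface: "lo", the redirect matches the lo interface, so the proxy's own
# upstream connection to localhost:80 would be redirected back into the proxy:
iptables -t nat -A PREROUTING -i lo -p tcp --dport 80 -j REDIRECT --to-ports 8080

# The generated REJECT rule resets such connections instead of letting them loop:
iptables -A INPUT -i lo -p tcp --dport 80 -j REJECT --reject-with tcp-reset
```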
The test, however, will appear to work, with neither disruptions nor resets. This is because `kubectl port-forward` will forward the target port on both a v4 and a v6 address. k6 will see a connection reset while attempting to connect to `127.0.0.1:3333`, as per the `REJECT` rule above. When this happens, k6 will automatically fall back to `[::1]:3333`, which is exempt from the disruption and thus works normally, with neither disruption nor reset.
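The dual-stack binding can be seen in the output of `kubectl port-forward` itself, which typically prints something like:

```sh
$ kubectl port-forward pod/my-app 3333:80
Forwarding from 127.0.0.1:3333 -> 80
Forwarding from [::1]:3333 -> 80
```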
### Moving forward

There are several paths forward, none of them trivial:

1. Document this limitation and consider traffic sent via `kubectl port-forward` out of the scope of the problem.
2. Allow the proxy to use a different address, rather than `localhost`, as upstream, so the `iface: "lo"` workaround works (see the sketch below).
   a. This will have bad UX.
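As an illustration of option 2, the agent invocation might look roughly like this. The flag spellings are illustrative; `--upstream-host` in particular does not exist today and is a hypothetical name for the new option, and the address is a made-up pod IP:

```sh
# Hypothetical: disrupt traffic on lo while dialing the pod IP as upstream,
# so the proxy's own connections are not caught by the lo redirect rule:
xk6-disruptor-agent http --iface lo --upstream-host 10.244.0.12
```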