
[Bug] UDP requests mapping from tun0 to eth0 #262

Open · 2 tasks done
engageub opened this issue Jun 5, 2023 · 25 comments
Assignees
Labels
bug Something isn't working help wanted Extra attention is needed

Comments

@engageub

engageub commented Jun 5, 2023

Verify steps

  • Is this something you can debug and fix? Send a pull request! Bug fixes and documentation fixes are welcome.
  • I have searched on the issue tracker for a related issue.

Version

Docker latest version

What OS are you seeing the problem on?

Linux

Description

A script has been created that uses tun2socks containers to redirect proxy requests for Internet Income.
DNS requests are bypassed on UDP port 53 so the applications can work. Most of the applications require UDP only for DNS queries, but Mysterium, unlike the others, also requires UDP to establish connections and send data for monitoring.
The problem is that the application shares the network interfaces of the tun2socks container, so the default interface is eth0 and the Mysterium application is assigned an IP address from that interface.
TCP requests work fine, but UDP requests are not being validated correctly by the application.
A Dante SOCKS5 server was created that serves both UDP and TCP. Running commands that use UDP from a terminal works fine, but not through the Mysterium application. It seems to be validating the destination IP address or the protocol itself.
There are no errors or warnings in the tun2socks container for UDP, since the requests successfully reach the remote server.
To investigate further, an Ubuntu container was created on the tun2socks network, and tcpdump and tshark were installed to capture and analyse the packets going through it.

This command was run and traffic was captured through tun0 and eth0 interfaces.

nslookup google.com 1.1.1.1

With proxy UDP eth0 interface

 1   0.000000  172.17.0.13 → 194.113.194.54 UDP 80 38556 → 1199 Len=38
 2   0.381092 194.113.194.54 → 172.17.0.13  UDP 96 1199 → 38556 Len=54
 3   2.291440  172.17.0.13 → 194.113.194.54 UDP 80 37672 → 28168 Len=38
 4   2.605092 194.113.194.54 → 172.17.0.13  UDP 108 28168 → 37672 Len=66

With proxy UDP tun0 interface

 1   0.000000   198.18.0.1 → 1.1.1.1      DNS 56 Standard query 0x0d43 A google.com
 2   2.201513      1.1.1.1 → 198.18.0.1   DNS 72 Standard query response 0x0d43 A google.com A 142.251.32.110
 3   2.205195   198.18.0.1 → 1.1.1.1      DNS 56 Standard query 0x4530 AAAA google.com
 4   4.402915      1.1.1.1 → 198.18.0.1   DNS 84 Standard query response 0x4530 AAAA google.com AAAA 2607:f8b0:4006:81d::200e

Without proxy eth0 interface (bypassed port 53)

 1   0.000000  172.17.0.13 → 1.1.1.1      DNS 70 Standard query 0x06b3 A google.com
 2   0.001929      1.1.1.1 → 172.17.0.13  DNS 86 Standard query response 0x06b3 A google.com A 142.250.67.78
 3   0.002173  172.17.0.13 → 1.1.1.1      DNS 70 Standard query 0x365c AAAA google.com
 4   0.004239      1.1.1.1 → 172.17.0.13  DNS 98 Standard query response 0x365c AAAA google.com AAAA 2404:6800:4007:810::200e

Without proxy tun0 interface
No packets were present since the firewall was bypassed in this case.

Comparing the outputs above, the DNS queries in the Mysterium application succeeded only when the firewall bypass for port 53 was in place. So the application seems to be validating the destination IP address, the protocol, or something else.
If you look at the destination addresses on eth0 with and without the proxy, they differ: the proxied capture shows the IP address of the SOCKS5 proxy as the destination, whereas the bypassed capture shows 1.1.1.1.
Normally, when eth0 is the default interface, a request passes through it first before reaching any other interface beyond it.
A connection through a SOCKS5 proxy should look like this:
Source Address --> Destination Address --> SOCKS5 Proxy --> Internet

But eth0, the default interface, shows the following:
Source Address --> SOCKS5 Proxy
SOCKS5 Proxy --> Source Address

In addition, the requests on the eth0 interface are dissected as UDP instead of DNS when the proxy is used.

The tun0 interface shows 1.1.1.1 as the destination IP address, matching eth0 without the proxy.

Is it possible to change the default interface to tun0, since it shows the correct destination just like eth0 without the proxy, or is it possible to clone an interface to achieve this?
The application invalidates the requests when UDP goes over the eth0 interface and reports a timeout, since it did not receive the packet it was expecting.

CLI or Config

No response

Logs

No response

How to Reproduce

  1. Visit https://github.com/engageub/InternetIncome
  2. Download the script and install as mentioned
  3. Create Dante Socks5 server or any other server which supports UDP
  4. Enable logs in the properties.conf file to see the errors in the application logs
  5. Note that port 53 is bypassed by default in the code.
  6. Comment or uncomment the bypass to see why the application fails to connect for DNS queries over the proxy.
@xjasonlyu
Owner

Where did you deploy your dante server? If your dante server is not on the same LAN as tun2socks, that usually causes issues like this.

@engageub
Author

engageub commented Jun 5, 2023

The dante server was deployed on an external VPS. It worked with Linux terminal commands that use UDP, but not with the Mysterium application.
Thank you

@engageub
Author

engageub commented Jun 5, 2023

Please find the documentation followed to implement this.

https://www.digitalocean.com/community/tutorials/how-to-set-up-dante-proxy-on-ubuntu-20-04

The following reference was used to add extra configs though not required for now to test DNS: https://stackoverflow.com/questions/49855516/telegram-calls-via-dante-socks5-proxy-server-not-working

Thank you

@engageub
Author

engageub commented Jun 6, 2023

Port 53 was bypassed to start the application. Looking further, the pinger.go entries in the logs show a packet length of 32 along with some garbage messages, whereas a direct connection shows a packet length of 44.
The code expects the response to be "OK" but instead receives garbage.

Please find the difference in the logs.

Success Logs with direct connection:

2023-06-06T21:42:54.881 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:38576
2023-06-06T21:42:54.882 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:31326
2023-06-06T21:42:54.882 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:21937
2023-06-06T21:42:54.882 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:13855
2023-06-06T21:42:54.882 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:41718
2023-06-06T21:42:54.882 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:22617
2023-06-06T21:42:54.882 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:10432
2023-06-06T21:42:54.882 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:41842
2023-06-06T21:42:55.507 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44
2023-06-06T21:42:55.507 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44
2023-06-06T21:42:55.507 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44
2023-06-06T21:42:55.507 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44
2023-06-06T21:42:55.508 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44
2023-06-06T21:42:55.508 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44
2023-06-06T21:42:55.508 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44
2023-06-06T21:42:55.508 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 44

Failure logs when tun2socks with Socks5 server is used:

2023-06-05T09:14:33.987 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:56437
2023-06-05T09:14:33.989 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:35311
2023-06-05T09:14:33.919 DBG ../../nat/traversal/pinger.go:460 > Local socket: 0.0.0.0:49819
2023-06-05T09:14:38.098 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:38.099 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�Bt؃
"�4�6�pK�d�cft - attempting to continue
2023-06-05T09:14:38.490 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:38.581 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B1���!�gB��:�cft - attempting to continue
2023-06-05T09:14:38.606 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:38.610 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B
������!���cft - attempting to continue
2023-06-05T09:14:38.693 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:38.698 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B�0앑�2۬�Gt�n�cft - attempting to continue
2023-06-05T09:14:38.785 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:38.786 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B�[l���\���q��cft - attempting to continue
2023-06-05T09:14:41.481 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:41.481 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:41.482 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:41.482 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:41.483 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B� � H�辭e����cft - attempting to continue
2023-06-05T09:14:41.484 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B�{pߴ2��8�$y���cft - attempting to continue
2023-06-05T09:14:41.484 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B N-����������cft - attempting to continue
2023-06-05T09:14:41.484 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:41.485 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B4���I>�����cft - attempting to continue
2023-06-05T09:14:41.485 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B��ew&�&wO�8���cft - attempting to continue
2023-06-05T09:14:41.490 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:41.493 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�Ba�8��J�[��cft - attempting to continue
2023-06-05T09:14:41.494 DBG ../../nat/traversal/pinger.go:394 > Remote peer data received, len: 32
2023-06-05T09:14:41.497 DBG ../../nat/traversal/pinger.go:400 > Unexpected message:
!�B�P�/�l������k�cft - attempti

@engageub
Author

engageub commented Jun 7, 2023

Tried enabling UDP over the proxy and replicated the code mentioned, which throws the following error in Mysterium.
The error indicates that some response was received; otherwise it would time out.

Code link for the Error Message: https://github.com/mysteriumnetwork/node/blob/master/requests/dialer_swarm.go

Error message in Mysterium logs:

2023-06-07T01:37:42.904 WRN ../../requests/dialer_swarm.go:293 > Failed to lookup host: "discovery.mysterium.network" error="lookup discovery.mysterium.network on 1.1.1.1:53: dial udp 1.1.1.1:53: operation was canceled"
2023-06-07T01:37:42.979 WRN ../../requests/dialer_swarm.go:293 > Failed to lookup host: "location.mysterium.network" error="lookup location.mysterium.network on 1.1.1.1:53: dial udp 1.1.1.1:53: operation was canceled"
2023-06-07T01:37:42.979 WRN

The code below was replicated on an Ubuntu docker container through tun2socks:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	hostname := "location.mysterium.network"

	// Resolve with a 10-second timeout using the default resolver.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	ipAddresses, err := net.DefaultResolver.LookupHost(ctx, hostname)
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
		return
	}

	for _, ipAddress := range ipAddresses {
		fmt.Println("IP address:", ipAddress)
	}
}


The above code successfully received a response, with no errors either through the proxy or without it.

However, dialer_swarm.go throws this error when the proxy is used in the container. If port 53 is bypassed, no UDP DNS errors are shown.

There seems to be a problem with the Mysterium container environment when it runs with tun2socks.
The code appears to conflict with tun2socks when responses are received for DNS and other UDP requests through the proxy.

Thank you

@xjasonlyu
Owner

Haven't found any issues yet. Can you provide tcpdump/wireshark info about a DNS query with tun2socks?

@engageub
Author

engageub commented Jun 7, 2023

Pcap files.zip

Please find the attachment for PCAP files

@engageub
Author

engageub commented Jun 7, 2023

Please find attached the UDP-only logs. These also include the packets where garbage values were received, as mentioned above.
Only UDP PCAP files.zip

Thank you

@engageub
Author

engageub commented Jun 8, 2023

Installed the replicated code in the Mysterium container, which runs on Alpine. Installed the Go toolchain, built the replicated network test file, and the request succeeded with a response inside the Mysterium container.
Only when running the Mysterium code itself does it fail as described when the proxy is used; the application seems to receive garbage or truncated responses.

Thank you

@engageub
Author

engageub commented Jun 8, 2023

If you would like to replicate this issue, simply run the following commands with a SOCKS5 proxy. The default DNS may not work, so please use the following to add DNS servers.

sudo docker run --name tunmyst  --restart=always -e LOGLEVEL=debug -e PROXY=YOUR_SOCKS5PROXY  -v '/dev/net/tun:/dev/net/tun' --cap-add=NET_ADMIN -d xjasonlyu/tun2socks
sudo docker exec tunmyst  sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf;'
sudo docker exec tunmyst  sh -c 'echo "nameserver 1.1.1.1" >> /etc/resolv.conf;'

sudo docker run -d --cap-add=NET_ADMIN --network="container:tunmyst" -v $PWD/mysterium-data/node:/var/lib/mysterium-node  --restart unless-stopped -p 4449:4449 mysteriumnetwork/myst:latest service --agreed-terms-and-conditions

Thank you

@engageub
Author

engageub commented Jun 8, 2023

Analyzing the UDP packets: the DNS requests on the eth0 interface have bad checksums, whereas on the tun0 interface there are no bad checksums.

(Screenshot attached: IncorrectCheckSum)

@engageub
Author

engageub commented Jun 9, 2023

(Quoted the Jun 7 comment above: the dialer_swarm.go lookup errors and the replicated resolver test.)

The error message "operation was canceled" appears in the WireGuard service's netstack.go file. Providing this as a pointer for further investigation of the issue:

https://github.com/mysteriumnetwork/node/blob/4daf90583afda92c68410ea35ed159132b629b9c/services/wireguard/endpoint/netstack/netstack.go#L517

Thank you

@engageub
Author

engageub commented Jun 23, 2023

Adding a further observation on this.
Bypassing both UDP and TCP around tun0 works fine without any issue. This means packets going directly through eth0 have no problem inside the container, but when they are intercepted and redirected through the proxy, UDP requests fail.
There is also a particular scenario where a data length of 44 is received when both TCP and UDP are bypassed, whereas through tun0 the remote data received has length 32. It looks like some logic in the code is limiting the data to 32 when UDP goes via the tun0 interface.

Looking at pinger.go, the buffer length for UDP is 64.
Is your code limiting this buffer somewhere while processing the request?
The actual length received with a direct connection is 44, whereas through the tun0 interface the length received is 32.

Thank you

@SkullFace141

any update on this issue?

@engageub
Author

engageub commented Jul 12, 2023

@xjasonlyu,
This has been tested with gluetun using an OVPN config; it worked fine with UDP and earnings were visible. Could you please compare their UDP config with yours to see whether there is a difference in the way you handle UDP connections?

Thank you

@xjasonlyu
Owner

@engageub Thanks for your information. But to be frank, I'm somewhat confused about the needs and the issues.

Could you check the relevant code and show me the right/expected behavior?

@engageub
Author

@xjasonlyu,
Thank you for the response. To summarize the issue:

  1. UDP connections fail while receiving data.
  2. The actual length of data received by the application over a direct connection is 44, whereas through tun2socks it shows 32, and the Mysterium logs show garbled characters instead of the expected OK message.
  3. I purchased SOCKS5 proxies with UDP support and they had the same issue, so the Dante server configuration I created myself was not the problem.

It's not impossible to debug, but it takes time to figure out where the issue is. I would need to learn Go, advanced networking, and then your code, and also set up and test it. All of that takes a lot of time, depending on the complexity of the code.

The following tests need to be performed on the code. If you already know the answers, please let me know:

  1. Is it possible to add a simple check (perhaps in DEBUG mode) comparing the length of a packet as received from the network and after it is sent to the eth0 adapter?
  2. If the data length is greater than 32, is it received intact, or is it getting truncated somewhere in the code?

Thank you

@xjasonlyu
Owner

Thanks for your summary! It's rather strange, because I never have UDP receiving issues and I frequently use DNS over it.

But yes, you can add some test code to debug:

  1. Try adding debug logs in https://github.com/xjasonlyu/tun2socks/blob/main/core/device/iobased/endpoint.go; it controls the input and output of IP packets from the network interface, so you may find something useful there. Note that if you're on linux/amd64, you may need to configure the core/device/tun package (by changing the build tags) to force this software to use the WireGuard TUN instead of gVisor's.
  2. We don't truncate packets in this project, but packets may be fragmented by the network stack due to the IP fragmentation mechanism.

@engageub
Author

Hi @xjasonlyu,
Frankly speaking, I don't have much idea of what you are describing. I have not done network programming; I only know the basics of the protocols, and I had not heard of gVisor before.
I was looking for ways to earn income, and among the available docker containers this is the only one that requires UDP traffic. It has the potential to earn about $100 per IP address by sharing an internet connection, as can be seen on the leaderboard: https://mystnodes.com/leaderboard
If you could kindly look into it, I would appreciate your efforts on this.
I have not created a release of my script yet, since this use case is still pending; it would be complete once the UDP protocol works. There won't be any other use case left once this connection succeeds.
I see you have been engaged with this project for more than 2 years, so you have a better understanding, and it would be much quicker if you could look into it.

What I searched for in the code was the number 32, since the data was truncated to that length, to see if it was limited somewhere. I saw a lot of lines with that number, and I do not know why it is used instead of 64.

Although this works with gluetun, gluetun cannot use plain SOCKS5 proxies.
The Internet Income script currently depends on this project to redirect traffic through plain proxies.

I would appreciate it if you could look into resolving the UDP issues with Mysterium containers.

Thank you

@Tutez64

Tutez64 commented Jul 18, 2023

Hi @xjasonlyu, do you have any update or plan about this? I'm having the same issue and a fix would be greatly appreciated.

@xjasonlyu
Owner

Hi, from what I understand, this issue may be related to NAT types. There is an old version of tun2socks that uses cone NAT, and I feel it may work: https://github.com/xjasonlyu/tun2socks/releases/tag/v2.3.2

@Tutez64

Tutez64 commented Jul 18, 2023

I tried this version, but unfortunately it doesn't seem to make any difference.

@xjasonlyu xjasonlyu added bug Something isn't working help wanted Extra attention is needed labels Jul 18, 2023
@xjasonlyu xjasonlyu self-assigned this Oct 23, 2023
@inkpool

inkpool commented Nov 20, 2024

@Tutez64 Did you happen to resolve this issue? I think I met similar issue. If not, is there any better solution around?

@Tutez64

Tutez64 commented Nov 20, 2024

@Tutez64 Did you happen to resolve this issue? I think I met similar issue. If not, is there any better solution around?

I wasn't able to. From what I heard and experienced, Mysterium simply doesn't work with proxies.

@engageub
Author

Most apps do not require UDP apart from DNS requests, so this is the opportunity to resolve the UDP issues.
The responses from the proxy and from a direct connection have to be compared. VPN connections using gluetun also work fine with Mysterium. Someone has to look into the difference and update the code accordingly.
