Getting MaxListenersExceededWarning and then port7777 shutting down #36

Open

kimberlymcm opened this issue Sep 20, 2022 · 16 comments

@kimberlymcm

It connects and then immediately shuts down after this MaxListenersExceededWarning. The exact same process was working before. Any thoughts on how to fix this?

➜  layer_env git:(main) ✗ 7777 --verbose
Using the AWS region us-west-2.
Validating the 7777 license.
Generating unique RSA keys for the SSH tunnel.
Checking if the port 7777 is available.
Port 7777 selected.
Listing databases.
Which database would you like to connect to?

[1] database2aug2022 
[2] staging 
: 1
Selected database database2aug2022.
Retrieving the availability zone and subnet of your instance.
Subnet on us-west-2d could not be found. Using us-west-2a instead. Consider creating a Subnet in the same Availability Zone as your RDS.
Checking if 7777 is set up in the AWS account.
7777 is already set up, moving on.
Retrieving the container security group.
Retrieving the computer's IP address.
Authorizing 67.58.234.130 on the security group.
Starting the Fargate container.
The IP address of the container is 54.187.174.180.
Starting the SSH tunnel to 54.187.174.180:22.

Tunnel created 🎉
Connect to the database on port 7777 on your machine.

Press Ctrl+C to stop and destroy the connection.
(node:22904) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [Client]. Use emitter.setMaxListeners() to increase limit
(Use `7777 --trace-warnings ...` to show where the warning was created)
The container is shutting down (Fargate task arn:aws:ecs:us-west-2:394462616482:task/7777Cluster/b7f9dfc74cf14128bf6348cb1915a95f).
Removing IP from security group
@mnapoli
Member

mnapoli commented Sep 28, 2022

Hi @kimberlymcm, sorry for the delay.

Regarding MaxListenersExceededWarning: this is just a warning. It's good to know about (thanks for reporting it), but it shouldn't affect execution and shouldn't be the problem here.
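
For context, that warning is Node's generic EventEmitter diagnostic: it fires once more than 10 listeners are attached to the same event on a single emitter. A minimal sketch (hypothetical, not 7777's actual code) of how it's triggered and how to silence it:

```ts
import { EventEmitter } from "node:events";

const client = new EventEmitter();

// If the extra listeners are intentional (e.g. one "close" listener per
// forwarded connection), raising the limit up front silences the warning.
client.setMaxListeners(20);

// Without the line above, the 11th listener would trigger
// MaxListenersExceededWarning.
for (let i = 0; i < 11; i++) {
  client.on("close", () => {});
}
```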

But that doesn't bring us closer to solving this 🤔 It's weird that the container shuts down immediately. Would you be able to open up the AWS Console and see if there are logs for the container?

You can open up the 7777 ECS Cluster in "ECS" or follow this link: https://us-east-1.console.aws.amazon.com/ecs/home?region=eu-west-1#/clusters/7777Cluster/tasks (change the region in the URL before opening it).

You could check out the logs, or also the "Stopped reason":

[screenshot: ECS task details showing the "Stopped reason" field]

@slootjes
Contributor

slootjes commented May 8, 2023

I'm experiencing the same thing, and I found this environment variable set on the Fargate task (in the same place where AUTHORIZED_KEYS is set):

TERMINATE_AFTER_SECONDS: 7200

And indeed it quits after exactly 2 hours, showing this in my console when the time is up:

(node:1) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [Client]. Use emitter.setMaxListeners() to increase limit
(Use `7777 --trace-warnings ...` to show where the warning was created)
The tunnel has been closed (the container probably self-terminated after reaching its timeout).
(the line above is repeated 12 times in total)
The container is shutting down (Fargate task arn:aws:ecs:{region}:{accountId}:task/{clusterName}/{clusterId}).
Removing IP from security group

For me this is totally fine, since it means I can't accidentally leave it running 24/7. Having the option to adjust the timeout would be nice, though.
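
A timeout like this is typically just a timer read from the environment. A hypothetical sketch (only the TERMINATE_AFTER_SECONDS variable name comes from the actual task definition; everything else is illustrative):

```ts
// Hypothetical sketch of a container self-termination timer; only the
// TERMINATE_AFTER_SECONDS variable name is taken from the real task definition.
const ttlSeconds = Number(process.env.TERMINATE_AFTER_SECONDS ?? 7200);

setTimeout(() => {
  console.log(`Timeout: terminating after ${ttlSeconds} seconds`);
  // Exiting PID 1 stops the container, which ends the Fargate task
  // and tears down the tunnel.
  process.exit(0);
}, ttlSeconds * 1000);
```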

@slootjes
Contributor

Even with the latest version this is still an issue, and the tunnel sometimes shuts down after as little as 10 minutes. Can this be solved, please? It's quite useless if I'm getting disconnected all the time.

@mnapoli
Copy link
Member

mnapoli commented Sep 11, 2023

Hey @slootjes, could you check out the logs and info I described in my comment above? That might help us understand what is going wrong.

@slootjes
Contributor

I just see Timeout: terminating after 7200 seconds, as I stated before. For the quicker disconnects I don't see any logs, as the container seems to keep running. In my terminal I see SSH tunnel error: read ECONNRESET. Overall, 7777 is currently a very unreliable tool for me, unfortunately.

@mnapoli
Member

mnapoli commented Sep 11, 2023

I just see Timeout: terminating after 7200 seconds, as I stated before.

I'm sorry, I just want to make sure we're talking about the same thing.

In your previous comment you talked about the 7777 CLI output, not CloudWatch logs of the container running in AWS, correct? (I'm confused by the "as I stated before" and the fact that you shared a different log output)

But the new log line seems to be from CloudWatch, correct?

So I assume the 7777 container does not log anything else at all? Not even a log showing that your 7777 CLI client connected?

For the quicker disconnects I don't see any logs, as the container seems to keep running.

That's why I'm asking you to check the status of the container in the AWS console.

If the new log line you shared is actually coming from CloudWatch logs, it means the server runs correctly and it's not the server that is disconnecting; it's the CLI or the network in between.

I'm sorry you're having such trouble. Could you, by any chance, have a firewall set up? (I've seen issues with firewalls on Windows, though TBH that wouldn't explain why it stops working at some point.)
Could it be an unreliable network connection? If you SSH to any other server, does it keep working without issues?

As a last resort, I would try reinstalling 7777:

7777 uninstall

7777

(remember to set the correct region/profile)

(FYI, if things get too desperate and you stop using 7777, please send us an email so that we can refund the purchase.)

@slootjes
Contributor

There seem to be 2 issues:

  1. The container automatically stops running after 7200 seconds; Timeout: terminating after 7200 seconds can be seen in the AWS Fargate logs.
  2. 7777 stops randomly with SSH tunnel error: read ECONNRESET shown on my local PC. The container continues to run in AWS with no errors in its logs.

Since I'm running 7777 from Docker for security reasons, I cannot uninstall it. I'm running the latest published version; I updated it this morning using docker pull port7777/7777:1. I'm on a stable 200 Mbit fiber connection that can stay connected to other machines over SSH for days. Windows Defender Firewall is on with default settings on Windows 11.

I really like 7777 and hope this can be fixed.

@mnapoli
Member

mnapoli commented Sep 11, 2023

The container automatically stops running after 7200 seconds; Timeout: terminating after 7200 seconds can be seen in the AWS Fargate logs.

That is normal (this is to avoid unintended costs). You can extend beyond two hours via the --ttl option:

[screenshot: documentation of the --ttl option]
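
For example, launching with something like 7777 --ttl <value> should extend the timeout; the flag name comes from the screenshot above, but the exact argument format and unit aren't shown here, so check the CLI help before relying on it.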

7777 stops randomly with SSH tunnel error: read ECONNRESET shown on my local PC. The container continues to run in AWS with no errors in its logs.

Got it. Do you get that often?

Since I'm running 7777 from Docker for security reasons, I cannot uninstall it.

7777 uninstall is about uninstalling from the AWS account, not from your machine.

@slootjes
Contributor

Ah, the --ttl option looks very useful, thank you!

The random resets happened 3 times today, without reaching the 2-hour limit. I set up 7777 manually in my AWS account, but I don't think that's the problem: one time today I did reach the 7200-second timeout, which means the setup itself is stable. It's just hitting a connection error somewhere at random.

@slootjes
Contributor

slootjes commented Sep 13, 2023

Can I help debug this? I really want to use 7777, but after the 4th SSH tunnel error: read ECONNRESET in an hour I'm getting quite frustrated. Maybe this needs its own issue instead, as it seems to be different from the original one.

edit: it seems to happen when I'm idle for a while and not using the tunnel (though it can happen after as little as a few minutes).

@mnapoli
Member

mnapoli commented Sep 18, 2023

@slootjes yes, maybe a new issue would be good. When you mention "idle", do you mean the laptop/PC goes to sleep? 🤔 Could that be it?

@slootjes
Contributor

I mean that the tunnel isn't actively used, i.e. no queries are being run against the database. Maybe it needs some kind of keep-alive or such? Possibly relevant (as stated before): I'm running 7777 from Docker.

@mnapoli
Member

mnapoli commented Sep 18, 2023

@slootjes that is a very good point! I just released v1.1.14 and implemented an SSH keep-alive: every 5 seconds, a packet is sent to try and keep the SSH tunnel alive.
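
For reference, here's roughly what that looks like with the ssh2 library (assuming 7777 uses ssh2, which the [Client] in the warning above suggests; the host, username, and key path below are placeholders):

```ts
import { readFileSync } from "node:fs";
import { Client } from "ssh2";

const conn = new Client();
conn
  .on("ready", () => console.log("Tunnel ready"))
  .on("error", (err) => console.error("SSH tunnel error:", err.message))
  .connect({
    host: "54.187.174.180", // the Fargate container's IP (placeholder)
    port: 22,
    username: "tunnel-user", // placeholder
    privateKey: readFileSync("./tunnel-key"), // generated RSA key (placeholder path)
    keepaliveInterval: 5000, // send a keep-alive packet every 5 seconds
    keepaliveCountMax: 3, // drop the connection after 3 unanswered keep-alives
  });
```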

Could you update and let me know if it helps?

@slootjes
Contributor

@mnapoli I've pulled the updated container and will report back, thanks!

@slootjes
Contributor

@mnapoli this seems to do the trick; it's stable now. Thanks a lot for your amazing support!

@mnapoli
Member

mnapoli commented Sep 19, 2023

That's awesome news, thanks for testing!
