[Bug]: Cronicle shuts down randomly despite only running simple Python scripts via "Test Plugin" #862
Comments
Well Cronicle isn't "crashing". It's receiving a SIGTERM signal to shut down:
So, it shouldn't have anything to do with the number of users or browsers you have open. Cronicle can handle thousands easily. There is some problem with your systemd configuration:
This is the smoking gun right here:
So, systemd is failing to "START" Cronicle, but you said it was up for 12 hours before this happened? So something is VERY wrong with systemd. It doesn't realize that Cronicle has started successfully and is running happily, so it shuts it down. It sounds like it's still WAITING for it to start, and eventually gives up after 12 hours. I have no idea why systemd would do this. I think the problem MAY be in your environment variables:
Environment variables are always interpreted as strings, so these values CANNOT be set to zero in this way. The reason is that the variable comes into Node.js as "0" (with quotes), which is a TRUE value. So Cronicle is ENABLING echo and foreground modes!!! Since you have your systemd service type set to "forking", this is a big problem, because you have Cronicle launching itself in FOREGROUND mode, which will NOT fork. Please try removing these environment variables entirely. They default to disabled anyway, so there is no need, and no way, to set them to "0".
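A quick way to see this string behavior for yourself; this is just an illustration, not from the thread, and any Node.js install will do (CRONICLE_echo is the variable name from the unit file above):

```sh
# Environment variables always arrive in Node.js as strings, and the string "0" is truthy,
# so CRONICLE_echo=0 still switches echo mode ON.
CRONICLE_echo=0 node -e 'console.log(typeof process.env.CRONICLE_echo, Boolean(process.env.CRONICLE_echo))'
# prints: string true
```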
Thanks for your insight. BTW, I had the same problem when I had the default Cronicle systemd config.
Any idea why it would happen with this systemd config?
I think you have those environment variables set somewhere else (like in the container host or container config). Try grepping the Cronicle log for "Applying env config override:" and see what's happening on startup:

grep 'Applying env config override:' /opt/cronicle/logs/Cronicle.log
The above command returns nothing, maybe because the log only has output from the Cronicle instance that is currently running, not all the previous ones as well.
gives empty output. Any idea how I should catch this issue when it crashes next time? Any logs that I should enable?
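On the "set somewhere else" theory above, two standard ways to see which environment the running daemon actually received; the unit name cronicle and the pgrep pattern are assumptions, so adjust them to this install:

```sh
# What Environment= settings does systemd apply to the unit?
systemctl show cronicle --property=Environment

# What environment did the live process actually inherit? (the pgrep pattern is a guess)
tr '\0' '\n' < /proc/"$(pgrep -f cronicle | head -n1)"/environ | grep '^CRONICLE_'
```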
Restart the service (or reboot your server, so systemd starts Cronicle), THEN grep the log. The key is to find out what happens at startup. Check if Cronicle is forking, or not.
If you see this, it means that Cronicle is starting up as a forking daemon (which is what you want). If you DON'T see this, then somehow it is not forking.

Also look for this after a reboot:

grep 'Applying env config override:' /opt/cronicle/logs/Cronicle.log
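Putting those checks together, roughly; the unit name cronicle and the default /opt/cronicle install path are assumptions:

```sh
# Restart so the startup lines are fresh in the log (or reboot, as suggested above):
sudo systemctl restart cronicle
sleep 10

# Does systemd itself consider the unit started and running?
systemctl status cronicle --no-pager

# What did Cronicle log at startup?
grep 'Applying env config override:' /opt/cronicle/logs/Cronicle.log
```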
OK, I restarted the entire server.
empty results
gives https://pastebin.com/0yNh81NX even though no job is running at this moment. I can see this in
@iamumairayub
If you stop Cronicle from the UI, will systemd restart it?
Now that we have CRONICLE_echo=0, so nothing helpful.
@jhuckaby OK, so it crashed again.
last few lines from
Huh, I've never seen a core dump before. How strange. Is there a crash log?
@jhuckaby I am using the default
This is the only Cronicle server that is crashing. I have Cronicle on other servers as well; they never crash despite having 100+ jobs. The only difference between those servers and this one is that this one is accessed by 20 different users, and all of them keep their browsers open to monitor their event logs.
I've never heard of Cronicle core dumping before. Some internet research has suggested that this could be an OOM (out of memory) event on that server. Having 20 users watching live logs will cause an increase in memory usage. Although you did say you have 32GB of RAM (is anything else running on that server that could have eaten up all the memory?), so I am really struggling to find potential causes. Here are some things to try: https://chatgpt.com/share/67a8e226-0128-8006-b71a-d7ba57747e6c I'm particularly interested in any core dump files you can find on that server, and what they contain. Quoting from that ChatGPT conversation above:
coredumpctl list
coredumpctl info 1089
coredumpctl gdb 1089
I assume you should replace the 1089 with your own core dump ID. Good luck!
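Along the same lines, a couple of standard checks for the OOM theory and for pulling a backtrace out of the dump; nothing here is Cronicle-specific, and 1089 is just the example ID from the commands above:

```sh
# Did the kernel OOM killer fire around the crash time?
dmesg -T | grep -iE 'out of memory|oom-killer'
journalctl -k --since "12 hours ago" | grep -i 'killed process'

# If coredumpctl lists an entry, the backtrace is usually the most useful part:
coredumpctl gdb 1089
# (gdb) bt            # backtrace of the crashing thread
# (gdb) info threads  # list all threads in the dump
```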
I have aerokube/selenoid on this server, which sometimes runs 15 concurrent sessions. I just ran them now, and here is the CPU/RAM usage: RAM always stays low like this, but CPU sometimes spikes up to 90%. I have also installed
I checked your ChatGPT link and tried every command as suggested; all commands return no results except
Question: is there a way to disable live Cronicle logs?
Thanks, so it died due to signal 6 (SIGABRT). I can't find anything else useful in that core dump, alas.

What Does Signal 6 (SIGABRT) Mean?

Signal 6 (SIGABRT) is an abort signal, typically triggered by:
- the process calling abort() itself (for example after a failed assertion)
- glibc detecting heap corruption (double free, invalid pointer)
- the runtime hitting a fatal internal error (in Node.js, a V8 fatal error such as the JS heap running out of memory aborts the process)
So my theory of an OOM doesn't sound very likely anymore, because you didn't see any evidence of an OOM event, and that would probably result in a SIGKILL, not a SIGABRT. The fact that Cronicle's own crash log (

I honestly don't know what else to try here. This is very mysterious, and not something I have ever seen from Cronicle before. And I've been running it in a live production environment with hundreds of servers since 2014. Up 24x7, thousands of jobs per day, and not a single crash.

I will say that I have only JUST begun testing my apps on Node.js v22. There may be something weird going on with v22 and Cronicle, but I can't imagine what that would be. You could try downgrading Node to v20 or v18, but that's really grasping at straws. I can't think of any other cause. I'm very sorry this is happening 😞
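One extra idea that is not from the thread: Node.js can write a diagnostic report when it hits a fatal internal error, which might capture the next SIGABRT. A rough sketch, assuming a systemd-managed install under /opt/cronicle:

```sh
# --report-on-fatalerror and --report-directory are standard Node.js flags accepted via NODE_OPTIONS.
# For a systemd-managed daemon, the setting has to live in the unit (or a drop-in), e.g.:
#   Environment="NODE_OPTIONS=--report-on-fatalerror --report-directory=/opt/cronicle/logs"
sudo systemctl edit cronicle        # add the Environment= line above in the drop-in
sudo systemctl daemon-reload && sudo systemctl restart cronicle
# After the next abort, look for report.*.json files in /opt/cronicle/logs
```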
Not officially, no. I suppose you could try setting the

I'm sorry I wasn't of more help. This is a very unusual issue.
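And for the Node.js downgrade idea above, a rough sketch assuming nvm is acceptable on that server (adjust to however Node was originally installed there):

```sh
# Install and switch to Node.js 20 LTS, then restart Cronicle so the daemon uses the new binary.
nvm install 20
nvm use 20
node -v                           # expect v20.x
sudo systemctl restart cronicle   # unit name assumed
# Note: if the unit file points at an absolute node path, that path needs updating too.
```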
Let's see what information
Don't be sorry, sir, you have made our lives easier by developing Cronicle. I am grateful to you :) Thanks for your time. I might post more comments if it crashes again.
Is there an existing issue for this?
What happened?
I have a simple Python script that makes a couple of requests to an API every 30 seconds.
It uses your "Test Plugin".
It is not a CPU/RAM-intensive script at all.
I have created 15 users in Cronicle.
They all run their own job, and have Cronicle open in their browser to see the live output.
These scripts are supposed to run forever.
But roughly every 12 hours, Cronicle crashes.
As said, there are no CPU/RAM spikes on the server, as this is the simplest of scripts.
Is it because we have too many browsers open to view logs?
Or what else could the reason be?
OS: Ubuntu 20.04.6 LTS
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
RAM: 32 GB
My Cronicle service file
PS:
I have another server that runs about 150 concurrent Python scripts using the "Shell Plugin".
It never crashes.
It is on a much smaller server than the one mentioned above.
One difference is that nobody keeps a browser open to watch those logs.
I just open it once a day to see if any job failed.
Node.js Version
v22.13.1
Cronicle Version
0.9.72
Server Setup
Single Server
Storage Setup
Local Filesystem
Relevant log output
Code of Conduct