I don't have sudo access, so currently I can't install 'Forever' (https://www.npmjs.com/package/forever).
Instead, I am simply using 'screen'.
I am running a Node.js server. At a random point the node server stops and the screen session exits. I cannot seem to collect any error data on this; I am completely unaware of why it's happening and cannot think of a way to catch what is going on. It doesn't happen often (maybe once per day). When I load PuTTY back up and log in to my Apache server through the terminal, I type screen -x or screen -r and it tells me there are no screens attached. The node server process definitely stops, because the app it runs stops working.
Obviously I can't post all the code here; there is tons of it. But everything appears to work wonderfully, except that every now and then something goes wrong and the attached screen is closed.
If there were a problem with the node server, I would expect a crash and the attached screen to stay attached, with an error printed to the terminal for me to see when I reattach. But in this case, it closes the attached screen entirely.
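In the meantime, one way to capture whatever the process prints before it dies is to append its output to a file inside the screen session. A minimal sketch, assuming the entry point is server.js and using node-crash.log as the log name (both are placeholders, not taken from the original setup):

    # inside the screen session: append stdout and stderr to a log that survives the crash
    node server.js >> node-crash.log 2>&1

    # after the screen session disappears, the last lines should show what went wrong
    tail -n 50 node-crash.log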
Does anybody know what kind of error can cause this?
On a side note, is there an alternative to 'Forever' that can be installed without sudo access?
My node version wasn't correct, which is why Forever wasn't installing; I didn't need sudo after all. I am now using Forever, and hopefully this will shed light on what is going on, as I have an out.log file which should catch whatever the problem is. :-)
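For reference, a minimal sketch of pointing forever at log files; out.log is the file mentioned above, while err.log and the entry point server.js are illustrative assumptions:

    # install locally, no sudo needed; the binary lands in ./node_modules/.bin
    npm install forever

    # start the app with stdout and stderr captured to files
    ./node_modules/.bin/forever start -o out.log -e err.log server.js

    ./node_modules/.bin/forever list   # confirm it is running
    tail -f out.log err.log            # watch for whatever precedes the crash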
I just installed all the necessary dependencies on a new AWS EC2 Linux 2 server and launched my long_running_script.py. This script essentially performs a few operations, then sleeps for a few hours, in a never-ending loop.
When I launched the script initially, I saw the correct output and all was fine. I disconnected from the instance, and when I reconnected I expected to see the same output as before.
Instead, I can't see any script output, and the script no longer shows up in the output of the 'ps aux' command.
Did disconnecting from the instance somehow abort the script? If so, how can I make sure it stays running?
Appreciate your help.
Did disconnecting from the instance somehow abort the script? If so, how can I make sure it stays running?
Yes, it did. There are many ways to solve this. You can launch it using tmux or screen; launching your program in one of these terminal multiplexers will keep it running after you log out.
There are also nohup and pm2, which can be helpful as well.
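A minimal sketch of each option, assuming the script is started with python3 long_running_script.py and using "worker" and worker.log as placeholder session/log names (adjust to match your setup):

    # Option 1: screen - start a detached session, reattach later with `screen -r worker`
    screen -dmS worker python3 long_running_script.py

    # Option 2: tmux - same idea, reattach with `tmux attach -t worker`
    tmux new-session -d -s worker 'python3 long_running_script.py'

    # Option 3: nohup - ignore the hangup signal and send output to a file
    nohup python3 long_running_script.py > worker.log 2>&1 &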
For some reason I often notice that, after having my terminal open for a while, I can't see the output from my Node.js server anymore. I know it's running, though. How do I re-open the node output? (By output, I mean the "console.log" calls in my Node.js code. I'm running on Linux.)
For anyone looking for an answer to this, my research conclusion is that it's an artifact of nodemon. You have to find the process still bound to the port and kill it, which I've spent an enormous amount of time doing. Not very helpful if you were running this in production and couldn't see output anymore, but it is what it is.
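A minimal sketch of that clean-up, assuming the server listens on port 3000 (substitute whatever port yours uses):

    # see what is listening on port 3000
    lsof -i :3000

    # kill it by PID (lsof -t prints only the PID)
    kill "$(lsof -t -i :3000)"

    # if it refuses to exit, escalate as a last resort
    kill -9 "$(lsof -t -i :3000)"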
I'm trying to create a simple Twitter bot to learn some Node.js skills.
It works fine on my local computer. I start the script with node bot.js and then close it with Ctrl + C.
I've uploaded the files to a server (Krystal hosting). I've SSH'd into the server and then used $ source /home/[username]/nodevenv/twitterbot/10/bin/activate, which I think puts me into a Node environment (I'm not really clear on what is happening here).
From here I can run node bot.js. My Twitter bot runs fine and I can leave the terminal. What I've realised now is that I don't know how to stop this script.
Can someone explain how I should be doing this? Is there a command I can enter to stop the original bot.js process? Having looked into this, it seems perhaps I should have used something like the pm2 process manager. Is this correct?
Any pointers would be much appreciated.
Thanks,
B
You can kill it externally by terminating the process from an OS command line or an OS GUI (the exact procedure varies by OS). Ctrl-C from the shell is one version of this, but it can also be done without the command shell it was started in, by killing the process directly.
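A minimal sketch of doing that from another SSH session, assuming the process is still running bot.js:

    # find the PID and full command line of the running bot
    pgrep -af "node bot.js"

    # ask it to exit (pkill sends SIGTERM by default, much like Ctrl-C's SIGINT)
    pkill -f "node bot.js"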
Or, you can add a control port (a simple little HTTP server running on a port only accessible locally) that accepts commands letting you do all sorts of things with the server, such as extract statistics, shut it down, change the configuration, tell it to clear caches so content updates take effect immediately, etc. Shutting down the server this way allows for a more orderly shutdown from code within the server: you can stop accepting incoming connections but wait for existing HTTP connections to complete before shutting down, close databases normally, and so on.
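A sketch of what driving such a control port could look like from the shell; the port number and the /shutdown path are hypothetical and would have to match whatever routes you actually implement in the server:

    # only reachable from the box itself if the control server binds to 127.0.0.1
    curl -X POST http://127.0.0.1:9010/shutdown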
Or, you can use a monitoring program such as pm2 or forever, which, in addition to restarting the server automatically if it should ever crash, also offer commands for shutting it down (these just send it certain signals, much like Ctrl-C does).
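With pm2, for example, the start/stop cycle looks roughly like this (the process name twitterbot is just an illustrative label):

    pm2 start bot.js --name twitterbot   # launch it and keep it running in the background
    pm2 list                             # see everything pm2 is managing
    pm2 logs twitterbot                  # view the console.log output
    pm2 stop twitterbot                  # stop it cleanly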
I have looked for an answer to this question and have yet to find anything quite like what I'm experiencing!
I set up an Ubuntu instance on AWS, standard configuration, medium tier.
I SSH into the server, where I have a script running in Node, which I launch with sudo nodejs server.js.
When it runs, it has a few REST endpoints which work just fine, and those write to a MongoDB (which also works just fine). However, if I leave my computer and come back the next day, I get the standard SSH broken-pipe message, which is fine. But when I then try to use my REST API, node server.js is clearly no longer running.
I am the only person consuming this API, so I don't think errors are bringing it down.
Has anyone experienced anything like this? Perhaps there is a configuration I am missing?
A friend just answered it for me:
sudo nodejs server.js > mylog.out 2>&1 &
This makes it run in the background and sends the output to a log file instead of nowhere (nowhere because of the broken pipe).
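Note that a plain background job can still be killed when the SSH session goes away (the shell may send it SIGHUP on exit). A slightly more robust sketch of the same idea wraps the command in nohup so the hangup signal is ignored; the command and log name are the ones from the answer above:

    # survives logout and keeps both stdout and stderr in the log
    nohup sudo nodejs server.js > mylog.out 2>&1 &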
I'm having a slightly weird, repeatable, but unexplainable problem with screen.
I'm using Ansible/Vagrant to build a consistent dev environment for my company, and as a slightly showy finishing touch it starts the dev server in a screen session, so the frontend devs don't need to bother logging in and manually starting the process, while the backend devs can log in and take control.
However, one of the systems - despite being built from scratch - ends up with an immediately dead screen (it doesn't log anything to screenlog). Running the command manually works fine.
(the command being)
screen -L -d -m bash -c /home/vagrant/run_screen_server.sh
I've even gone to the point of nuking everything Vagrant/VirtualBox-related on the system, making sure it's installing a clean nightly box. Exactly the same source box works on all the other machines.
Are there any other debugging steps I can be taking or is there something I'm missing?
I'm trying to do the same with my setup right now and have hit the same problem.
Further testing has shown that adding a sleep 1 right after calling screen helps. It seems the SSH script that Ansible calls exits before the screen session is fully detached (or something else is going on that the sleep 1 happens to work around).
I've also found "Can't get Fabric's detached screen session example to work", which makes the same suggestion.
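A sketch of the workaround as it might look in the provisioning step; the brief pause gives screen time to finish detaching before the SSH session that Ansible opened goes away (the script path is the one from the question above):

    # start the dev server in a detached, logging screen session, then pause briefly
    screen -L -d -m bash -c /home/vagrant/run_screen_server.sh
    sleep 1   # give screen a moment to detach fully before the ssh session exits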