BASH - why is my NodeJS server script crashing? - node.js

Running CentOS (Linux signal-server 3.10.0-123.8.1.el7.x86_64). When I log in to the server over SSH and execute the following line manually it works, but it crashes when I close my SSH connection to the server:
$ node /var/www/html/signal-server/avi.js &
This watchdog script also fails to keep the server running when I am not logged in via SSH:
#!/bin/bash
while :
do
    # 1
    avi=$(pgrep -f "avi.js")
    if [[ "$avi" ]]; then
        log1="1 - running avi.js $1 $2"
    else
        log1="1 - re-launch avi.js $1 $2"
        ps aux | grep "avi.js" | awk '{print $2}' | xargs kill
        sleep 1
        # same if nohup also used
        node /var/www/html/signal-server/avi.js &
    fi
    tt=$(date +"%Y-%m-%d %T")
    echo "$tt" "$log1"
    sleep 2
done
How can I make it run forever, please? (I have other services and they work fine; only this one crashes when I log out of the SSH session.)

nohup will do what you want:
nohup node /var/www/html/signal-server/avi.js &
You should really write a systemd unit for your service; then, if your server restarts, your service will start up again.
This is a rough idea of what such a unit may look like:
[Service]
ExecStart=/usr/bin/node /var/www/html/signal-server/avi.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=node-avi
User=some-user
Group=some-group
Environment=NODE_ENV=production
[Install]
WantedBy=multi-user.target
Spend a bit of time reading the systemd documentation.
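For completeness, a full unit file also needs a [Unit] section. The sketch below assembles one; the service name node-avi, the /usr/bin/node path, and the user name are assumptions. It writes to a temporary directory purely for illustration; on a real host the file belongs in /etc/systemd/system/, followed by systemctl daemon-reload and systemctl enable --now node-avi.

```shell
#!/bin/bash
# Sketch: assemble a complete unit file. Written to a temp dir here;
# install to /etc/systemd/system/node-avi.service on a real host.
unit_dir=$(mktemp -d)
cat > "$unit_dir/node-avi.service" <<'EOF'
[Unit]
Description=avi.js signal server
After=network.target

[Service]
ExecStart=/usr/bin/node /var/www/html/signal-server/avi.js
Restart=always
User=some-user
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
EOF
# A well-formed unit here has its three section headers:
grep -c '^\[' "$unit_dir/node-avi.service"
```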

Graceful shutdown on Archlinux

Bringing this question straight from here.
I am running Arch Linux and I have a VM that runs alongside the system often. Most of the time, actually.
My goal is to produce the following behaviour:
A shutdown / poweroff / reboot / halt signal is sent to the system
No action is taken other than trying to shut down the virtual machines gracefully
If the VMs are shut down gracefully within X seconds, proceed with shutting down the host system too
If not, execute a different command
Just give me a good idea of what to work on, because I don't even know where to begin. I guess there is a call to the kernel that can be looked at.
Let me know.
My current code
At the moment I am using these scripts to gracefully shut down my KVM virtual machines, and it works! But only as long as my user launches a shutdown or a reboot from his shell; any other case wouldn't work.
These aliases:
alias sudocheck="/bin/bash /home/damiano/.script/sudocheck"
alias sudo="sudocheck "
Are triggering this function:
#!/bin/bash
# This script checks what is being passed to sudo.
# If the command passed is poweroff or reboot, it
# launches a custom script instead, which also looks
# for currently running virtual machines and shuts them down.
sudocheck() {
    if [ "$1" == "poweroff" ] || [ "$1" == "reboot" ]; then
        sudo /home/damiano/.script/graceful "$@"
    else
        sudo "$@"
    fi
}
sudocheck "$@"
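One pitfall worth noting: $# expands to the number of arguments, not the arguments themselves, so passing the command along requires "$@". A quick sketch of the difference (show is a hypothetical function):

```shell
#!/bin/bash
show() {
    echo "count=$#"    # $# is the argument count
    echo "args=$*"     # "$@" / $* are the arguments themselves
}
show poweroff now
# prints:
# count=2
# args=poweroff now
```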
That launches this script if needed:
#!/bin/bash
i=0
e=0
## if virsh finds VMs running
virsh -c qemu:///system list | awk '{ print $3 }' | \
if grep running > /dev/null ; then
    ## shut them down gracefully
    virsh -c qemu:///system list --all | grep running | awk '{print "-c qemu:///system shutdown "$2}' | \
        xargs -L1 virsh
    ## wait 30 seconds for them to go down
    until (( i >= 30 || e == 1 )) ; do
        ## check every second for their status
        virsh -c qemu:///system list --all | awk '{ print $3 }' | \
        if grep -E '(running|shutdown)' > /dev/null ; then
            ## keep waiting if still running
            if (( i <= 30 )) ; then
                sleep 1 && let i++ && echo $i
            else
                e=1 && notify-send 'Shutdown has been canceled' 'Please check the status of your virtual machines: it seems that even though a stop signal has been sent, some are still running.' --urgency=critical
            fi
        else
            ## if no machine is running any more, the original power command can be executed
            e=1 && "$@"
        fi
    done
fi
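The until loop above is a general polling pattern: re-check a condition every second and give up after a timeout. Isolated into a reusable helper (wait_for and the flag file are hypothetical names, not part of the original script):

```shell
#!/bin/bash
# wait_for TIMEOUT CMD...: rerun CMD every second until it succeeds
# or TIMEOUT seconds have passed; returns 1 on timeout.
wait_for() {
    local timeout=$1 i=0
    shift
    until "$@"; do
        (( i >= timeout )) && return 1
        sleep 1
        i=$(( i + 1 ))
    done
}

flag=$(mktemp)                    # stands in for "all VMs are down"
if wait_for 3 test -f "$flag"; then
    echo "condition met"
else
    echo "timed out"
fi
```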
Systemd Unit
I also made the following draft, to manage the execution of my VM:
bootvm@.service
[Unit]
Description=This service manages the execution of the %i virtual machine
Documentation=https://libvirt.org/manpages/virsh.html
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/virsh -c qemu:///system start %i
ExecStop=/usr/bin/virsh -c qemu:///system shutdown %i
TimeoutStopSec=30
KillMode=none
[Install]
WantedBy=multi-user.target
But how can I tell the system not to shut down the desktop environment, and to stay as it is UNTIL the VMs have been successfully shut down?
Because if the system can't shut down the VMs, I want to do it myself while still in my DE. I don't want the computer to start stopping all the services and then hang until it forces the shutdown.

How do I prevent my Linux service (Ubuntu 16.04.5 LTS) from restarting when using loops that run in the background?

So I have a script set up that I want to run as a service. The script runs 8 curl commands continuously, forever:
#!/bin/bash
threads=8
check_interval=15
batch_url=https://myurl.com
for ((i = 0 ; i < threads ; i++)); do
    sleep $i
    while true; do
        sleep 0.5
        has_items=$(curl --silent -m 15.0 --connect-timeout 1.5 "$batch_url/$i")
        if [ "$has_items" == "false" ]; then
            sleep ${check_interval}
            continue
        fi
    done &
done
I have a service file setup in systemd that looks like:
[Unit]
Description=Run the batch forever
After=network.target
[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/bin/bash /usr/bin/my_script_location
[Install]
WantedBy=multi-user.target
If I simply run the script it works fine: it loops forever, polling URLs /0 through /7.
However, if I set it up as a service, the service restarts after some time.
I believe the issue is related to having the while loops run in the background, but I'm not sure how to make the service work with this. It's possible that the service approach is the wrong method; I simply need this script to remain running at all times.
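That diagnosis is likely right: with Type=simple, systemd tracks the process it started, and the script backgrounds its eight loops and then exits, so systemd sees the main process die and, with Restart=always, restarts the unit. A likely fix is to end the script with wait, so the parent stays alive as long as its workers. A minimal sketch of the pattern (run_workers is a hypothetical name, with short-lived workers so the demo terminates):

```shell
#!/bin/bash
run_workers() {
    local i
    for i in 0 1 2; do
        ( sleep 0.2; echo "worker $i done" ) &
    done
    wait   # without this, the parent exits at once and systemd restarts the unit
    echo "all workers finished"
}
run_workers
```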

Execute the last lines of a shell script after SSH disconnection

I have a bash shell script that sets up the network settings on a Linux machine; it will mainly be used over SSH. Here are the last few lines of the script:
service network stop
rm -rf $NETWORKFILE
touch $NETWORKFILE
echo NETWORKING=yes > $NETWORKFILE
echo HOSTNAME=$HOSTNAME >> $NETWORKFILE
mv $ETHFILE /etc/sysconfig/network-scripts/ifcfg-eth0
service network start
As you can see, to apply the network settings the script has to stop and then start the network, applying the settings while the network is down. This causes the SSH session to be disconnected on the first of the lines shown above, so the script stops and the settings are never applied. How can I have the shell script run these last few lines after the SSH session that started it has been disconnected? Also, it needs to be done in the code, not with a screen or nohup command when starting the script.
Try completely disconnecting the script from the terminal, by redirecting all standard streams and putting it in background:
nohup script < /dev/null > script.log 2>&1 &
Also, you can put "sleep 2" as the first line of the script so that, after putting the script in the background, you can quickly disconnect cleanly before the server closes the connection forcibly. This is just for convenience.
Maybe if you get your script's PID and then disown the process after stopping the network, your script will continue running after the SSH session is disconnected.
./script &
pid=$!
disown -h $pid
service network stop
rm -rf $NETWORKFILE
touch $NETWORKFILE
echo NETWORKING=yes > $NETWORKFILE
echo HOSTNAME=$HOSTNAME >> $NETWORKFILE
mv $ETHFILE /etc/sysconfig/network-scripts/ifcfg-eth0
service network start
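Note that $! (the PID of the most recently backgrounded job) is what's wanted here, not $$ (the PID of the shell itself). A quick check of the difference:

```shell
#!/bin/bash
sleep 5 &
job_pid=$!             # PID of the background job just launched
shell_pid=$$           # PID of the shell running this script
echo "shell=$shell_pid job=$job_pid"
disown -h "$job_pid"   # the job now survives the shell receiving SIGHUP
kill "$job_pid"        # clean up the demo job
```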

Run file for game server

Alright, so I have a .sh file that I run to launch my server with the certain specifics that I'm looking for. It launches the server, through screen, into its own screen session. Here's the code for my run.sh file:
#!/bin/bash
# run.sh
# conversion of run.bat to shell script.
echo "Protecting srcds from random crashes"
echo "Now launching Garrys Mod RequiemRP"
sleep 5
screen -A -m -d -S gmserver ./srcds_run -console -game garrysmod +maxplayers 32 +map rp_downtown_v6 -autoupdate
echo "Server initialized. Type screen -x to resume"
Usually I use a batch file to do this, but I'm now using Linux for my server hosting. Part of that batch file was that if srcds (the server itself) were to crash, the run.bat file would restart the server automatically. I'm looking to do the same with my run.sh file, but I'm unsure how.
Perhaps you could make a service or script that periodically checks whether the process is running. When executed, this will check whether the server is up and, if it isn't, start it:
#!/bin/bash
if ps cax | grep -q srcds; then
    exit
else
    bash /path/to/run.sh
fi
I tested the command and it works on my virtualized Debian 9 system.
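An alternative to periodic polling is to wrap the launch in a restart loop inside the screen session, so the server is relaunched the moment it exits. A sketch, with /bin/false standing in for ./srcds_run and the loop bounded to three attempts so the demo terminates (a real wrapper would use while true):

```shell
#!/bin/bash
# run_forever CMD...: relaunch CMD each time it exits.
run_forever() {
    local attempts=0
    while (( attempts < 3 )); do    # use "while true" for a real server
        "$@"
        attempts=$(( attempts + 1 ))
        echo "server exited, restart #$attempts"
        sleep 0.1
    done
}
run_forever /bin/false    # stand-in for ./srcds_run -console -game garrysmod ...
```

In run.sh, screen would then launch this wrapper instead of srcds_run directly, e.g. screen -A -m -d -S gmserver ./watchdog.sh (watchdog.sh being a hypothetical name for the wrapper).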

Running Jconsole from a service: CentOS

I installed Tomcat on my CentOS 6.3 machine and I made it a service by creating the /etc/init.d/tomcat file.
It works with the basic start, stop, restart and status functionality just fine.
I use jconsole on these servers often, so I thought it would be nice to build this functionality into the service (by running service tomcat monitor) instead of having to run ps aux | grep java and then jconsole <Java PID>.
Here is my service script (Just the monitor section):
monitor)
    # Check for Tomcat PID (greps are separated to prevent returning the single grep PID)
    FOUND_PID=$(ps aux | grep $JAVA_HOME/bin/ | grep java | awk -F' ' '{print $2}')
    if [[ $FOUND_PID ]]
    then
        echo -e $JAVA_HOME/bin/jconsole $FOUND_PID
        $JAVA_HOME/bin/jconsole $FOUND_PID
    else
        echo -e "Failed: Tomcat is not currently running";
    fi
    ;;
Everything inside the monitor section works when I run the bash script directly, but when the service calls it, it just hangs at the jconsole line and doesn't do anything.
And when I run service tomcat monitor, I do get the correct path output, so I know that the path is correct.
Is there a way to get the jconsole to work when called from the services script?
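As an aside, the double-grep PID lookup in the monitor section can usually be replaced by pgrep -f, which matches against the full command line and never matches its own process. A small demonstration with a throwaway sleep process (the pattern string is arbitrary):

```shell
#!/bin/bash
sleep 987 &                           # throwaway process with a distinctive command line
expected=$!
found=$(pgrep -f 'sleep 987' | head -n 1)
echo "expected=$expected found=$found"
kill "$expected"                      # clean up the demo process
```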