EC2 ubuntu launch node server on reboot not working - node.js

I'm trying to launch an Express app when my EC2 machine starts. I have a startup script:
#!/bin/bash
echo "will reroute traffic" >> /home/ubuntu/log.logs
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -A INPUT -p tcp -m tcp --sport 80 -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT
echo "will kill node" >> /home/ubuntu/log.logs
if pgrep node &> /dev/null ; then killall -KILL node ; fi
if pgrep nodejs &> /dev/null ; then killall -KILL nodejs ; fi
echo "will run node server" >> /home/ubuntu/log.logs
cd server && npm install && npm run build && npm run start </dev/null &>/dev/null &
echo "has run node server" >> /home/ubuntu/log.logs
If I launch the script from the console, it starts the server, exits, and the server keeps running fine.
To launch it at boot, I've added these lines to /etc/rc.local:
rm -f /home/ubuntu/log.logs
echo "will run" >> /home/ubuntu/log.logs
/bin/bash /home/ubuntu/startup.sh
echo "has run" >> /home/ubuntu/log.logs
After rebooting, the server is not responding, and it looks like it has not started (the server logs ticks while running, and those ticks are missing).
The output in log.logs looks fine:
will run
will reroute traffic
will kill node
will run node server
has run node server
has run
So everything seems to have been executed, but the node app is not running; I confirmed this by running top | grep node, which returns nothing.

I found that the cheap (or free) AWS VMs get CPU/network throttled, which caused npm installs and similar tasks to fail. Maybe use a VPS with better value, or try yarn. Also, make the script log the npm output to a file instead of /dev/null.
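For that last point, a minimal sketch of the logging change (the log file path is illustrative): the npm line in the startup script would become
cd server && npm install && npm run build && npm run start </dev/null >>/home/ubuntu/server.logs 2>&1 &
so npm's stdout and stderr land in a file you can read after boot instead of being discarded.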

It turned out that I had installed npm and node through nvm, and nvm adds a snippet to .bashrc that loads them. To start my script on reboot I was using cron, which does not source .bashrc. Additionally, the default .bashrc on AWS EC2 Ubuntu instances starts with a check on whether it is being run from an interactive terminal, and exits early if it is not, so sourcing it from cron has no effect.
I didn't see that, since the failing line
cd server && npm install && npm run build && npm run start
was not logging anything.
I ended up manually sourcing the paths to npm and node.
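A minimal sketch of that fix, assuming nvm's default install location under /home/ubuntu (the node version in the commented alternative is illustrative): near the top of startup.sh, after the shebang, load nvm explicitly so the non-interactive boot shell gets the same node and npm that an interactive login gets from .bashrc:
export NVM_DIR="/home/ubuntu/.nvm"
# load nvm if present; this puts the default node and npm on PATH
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
# alternative: hard-code the bin directory of one installed version
# export PATH="/home/ubuntu/.nvm/versions/node/v18.17.0/bin:$PATH"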

Related

How to start fastapi, react, node servers using a shell script file

I need to run many commands one by one to start my project. Instead of that, I tried to put the commands in a shell script file.
server.sh
#!/bin/bash
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
cd fastapi
uvicorn main:app --reload --port 8000
cd ..
cd reactjs
npm i
npm start
cd ..
cd node
npm i
npm run dev
These are the commands I put in a .sh file. The problem is that after the uvicorn main:app --reload --port 8000 command, the .sh file fails to execute the rest of the commands.
How can I resolve this using a .sh file or a YAML file?
You must run the three main long-running commands in your script in the background:
uvicorn main:app --reload --port 8000 &
npm start &
npm run dev &
The & after a command runs it in the background, so the script will not stop at the first command (uvicorn) and will continue with the rest of the code.
And because those commands generate output in the terminal (in case you are running them from one), that output can get mixed up, so I would recommend redirecting the output of every command you run in the background to a file.
Your code could be like this:
#!/bin/bash
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
cd fastapi
uvicorn main:app --reload --port 8000 > uvicorn.log 2>&1 &
cd ../reactjs
npm i
npm start > npmstart.log 2>&1 &
cd ../node
npm i
npm run dev > npmdev.log 2>&1 &
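One optional refinement, as a sketch: if the script itself must stay in the foreground until the servers exit (for example when a process supervisor runs it), end it with wait:
# at the end of the script, after the backgrounded commands
wait  # blocks until every background child of this shell has exited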
For killing those processes you should use the kill command.
There are several options for killing; you can visit kill signals on Linux to understand how it works.
Or, if you want to use a GUI, the system monitor might work, but you must know which PID is the one you want to kill.
When you start a process in the background, the shell saves its PID (process ID) in $!, so you can use the next statement to show the PID:
echo $!
And when you want to kill the process, use:
kill -9 the_pid_shown_with_echo
So your code could be like this (but it's not enough in this case):
#!/bin/bash
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
cd fastapi
uvicorn main:app --reload --port 8000 > uvicorn.log 2>&1 &
echo "Uvicorn ID: $!" | tee uvicornpid.txt
cd ../reactjs
npm i
npm start > npmstart.log 2>&1 &
echo "npm start ID: $!" | tee npmstart.txt
cd ../node
npm i
npm run dev > npmdev.log 2>&1 &
echo "npm run dev ID: $!" | tee npmrundev.txt
The statements with the tee command, like echo "Uvicorn ID: $!" | tee uvicornpid.txt, show the text in the terminal and also redirect it to a file, so you can check those files later to look up the PIDs.
But as I said, in this case that is not enough, because (at least with node) this launches several processes, and if you kill the process using the PID you got from $!, you kill only that process (and maybe one child). The process that is actually listening on the port can stay running, and when you run the app again it will crash because the port is already in use (unless you run the app on another port, which I would not recommend).
You can use several commands for getting the PID you should kill.
The first way is using commands like:
pgrep node
This will return the PIDs of all processes whose name matches the word node.
ps -efl | grep -E "(node|PID)"
This will return output with several columns, where you can see the PIDs of all processes matching the word node, plus other information that might be useful.
Other useful commands that might work even better for you are these:
lsof -i :4000
This will return the process listening on port 4000 (you will get the PID, the name of the process, and more information).
fuser 4000/tcp
This will return only 4000/tcp and the PID of the process listening on that port.
So, once you get the PID with one of those methods, you should kill the process with the kill command as I explained before.
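Putting that together, a hedged sketch of a stop script that kills whatever is listening on each port (the port numbers are illustrative):
#!/bin/bash
# stop.sh - kill the process listening on each given port
for port in 8000 3000 4000; do
  pid=$(lsof -t -i :"$port")  # -t makes lsof print only the PID(s)
  if [ -n "$pid" ]; then
    echo "Killing PID(s) $pid on port $port"
    kill $pid  # SIGTERM first; escalate to kill -9 only if it is ignored
  fi
done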

How to configure ports on apache server for iperf3

I'm using my apache server for running TCP and UDP traffic using iperf3.
I manually execute a command on my server to listen to a port.
~# iperf3 -i 5 -s -p 7759
-----------------------------------------------------------
Server listening on 7759
-----------------------------------------------------------
I'm wondering if there is a way to configure my apache server to have a few ports (say 7760, 7761, 7762, ... 7770) permanently open for iperf traffic, so that I don't have to manually execute the aforementioned command each time.
The answer depends on the definition of permanently open.
If ports remaining open after you log out from your webserver is a good enough approximation of permanently open, then all you need is to start iperf with the nohup command:
nohup iperf3 -s -p 7759 >/tmp/log 2>&1 &
See this question for more details on keeping background processes alive after the shell that spawned them terminates. In particular, check out the answers that suggest using the screen command.
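As a sketch of that screen-based alternative (the session name is illustrative):
# start a detached screen session named iperf7759 running the server
screen -dmS iperf7759 iperf3 -s -p 7759
# reattach later with: screen -r iperf7759
screen -dmS starts the session already detached, so the server survives logout much like the nohup variant.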
If you need the iperf server to keep the ports open between reboots, you need to configure the init process to spawn iperf3 at boot time. For this you need root access to your webserver.
As root you could add the following lines to the /etc/rc.local file:
iperf3 -s -p 7759 > /tmp/iperf-7759.log 2>&1 &
iperf3 -s -p 7760 > /tmp/iperf-7760.log 2>&1 &
...
iperf3 -s -p 7770 > /tmp/iperf-7770.log 2>&1 &
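Rather than writing one line per port, an equivalent loop sketch covering the range from the question:
# in /etc/rc.local: one iperf3 server per port, each with its own log
for port in $(seq 7760 7770); do
    iperf3 -s -p "$port" > "/tmp/iperf-$port.log" 2>&1 &
done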
See also this question on how to ensure a command is run every time the machine starts.

Vagrant SSH Tunnel, Node Debug, Automation won't work

I have a node server running within a Vagrant box. The script that starts node, start.sh, can pick up a debug flag from a file, which I named debug.mode.
On the local side I have a script, startdebug.sh, which logs into the Vagrant box over ssh, writes to the debug.mode file, restarts the script, waits until that is done, then tunnels port 5858.
If I run the start.sh file with debug.mode containing '--debug', node opens 5858 and the port is available (I'm checking within the VM using telnet).
If I do the same using startdebug.sh, node says it has opened the debugging port; however, port 5858 is unavailable when I try telnetting within the VM.
Any idea? :)
startdebug.sh
/usr/bin/vagrant ssh-config > /tmp/vagrant-ssh-config
ssh -F /tmp/vagrant-ssh-config nodejs "cd /var/www/sportsbook-api && echo $mode > debug.mode && export TERM=linux && sudo ./scripts/restart.sh"
sleep 2.5s
ssh -N -F /tmp/vagrant-ssh-config -L 5858:127.0.0.1:5858 nodejs &
start.sh
if [ -e "debug.mode" ]; then
debug=$(cat "debug.mode")
echo "\nNode $debug mode activated."
fi
nohup node src/main/apps/api & echo $! > run.pid &

Running a script on each reboot of EC2

I am running an Amazon Linux EC2 instance. It is a node.js server. I need to run the following command on each reboot/startup:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to 8080
The command above redirects port 80 to port 8080.
How can I achieve this?
I've solved the issue by placing my script in the /etc/rc.local file.
This file gets executed after all the other init scripts, which is what I needed.
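One hedged caveat, assuming a newer systemd-based image: /etc/rc.local is only picked up if it exists, is executable, and starts with a shebang, so a minimal file would look like:
#!/bin/bash
# /etc/rc.local - runs once at the end of boot
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to 8080
exit 0
followed by sudo chmod +x /etc/rc.local.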
After a lot of trials, the following worked.
crontab -e
@reboot cd /home/ec2-user/somedir/ && ./run.sh > output1.txt
vi ./run.sh
./run2.sh > output2.txt 2>&1 &
./run2.sh
# this had the actual commands; it also used a nohup command

Can I run Node.JS with low privileges?

I would like to run node with a low-privilege user; is it possible? I need to use the Express.js framework.
Yes. There are many solutions available to do this, depending on your exact needs.
If you want to run node on port 80, you can use nginx (which doesn't support WebSockets yet) or haproxy. But perhaps the quickest and dirtiest approach is to use iptables to redirect port 80 to the port of your choice:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8003
sudo iptables -t nat -L
When you're happy, save the config and make sure iptables comes back on at boot:
sudo service iptables save
sudo chkconfig iptables on
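Note that service iptables save and chkconfig are RHEL/Amazon Linux commands; on Debian/Ubuntu (an assumption about your distro), the usual equivalent is the iptables-persistent package:
sudo apt-get install iptables-persistent
sudo netfilter-persistent save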
To automatically start your nodejs service as non-root, and restart it if it fails, you can utilize upstart with a script like this:
#!upstart
description "nodeapp"
author "you"
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
script
export HOME="/home/user/"
exec sudo -u user /usr/local/bin/node /home/user/app.js >> /home/user/app.log 2>&1
end script
If you're on an Amazon EC2 installation, or you get an error that says sudo: sorry, you must have a tty to run sudo, then you can replace your exec command with this:
#!upstart
description "nodeapp"
author "you"
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
script
export HOME="/home/user/"
# Amazon EC2 doesn't allow sudo from a script, so use su --session-command
exec su --session-command="/usr/local/bin/node /home/user/app.js >> /home/user/app.log 2>&1" user &
end script
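Since upstart has been replaced by systemd on modern distributions, a hedged equivalent as a systemd unit (the user name, paths, and app location are taken from the upstart script above; the unit file name nodeapp.service is illustrative):
[Unit]
Description=nodeapp
After=network.target

[Service]
# run as the unprivileged user, like the sudo -u / su variants above
User=user
Environment=HOME=/home/user
ExecStart=/usr/local/bin/node /home/user/app.js
# automatically respawn, like upstart's respawn stanza
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/nodeapp.service and enable it with sudo systemctl enable --now nodeapp.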
And, you didn't ask this question, but to keep it running forever, check out monit! Here is a useful guide to setting up node.js with upstart and monit.
