Git hooks may need higher privileges - Linux

I'm using node.js for my webserver, and I would like forever (or something like it) to run the server.
I'm also using git to manage the website. I have a bare repository on the server that I can push to and pull from with my local machine. I would like the repository to do three things when I push to it:
1. cd to my working directory (on the server)
2. Have the working directory pull from the bare repo
3. Restart the running webserver
The following script seems like it's what I should use as a post-receive hook in my bare repo.
cd ~/site
git pull
sudo forever stopall
sudo forever start main.js
However, I don't think it's smart to have the git hook use sudo like that. The script needs elevated privileges to run on port 80.
How should I be doing this? What should my git post-receive look like?
Thanks!

Well, for my particular case, it turns out I shouldn't be running node as a super user for security reasons. I wanted it to be elevated to run on port 80, but it didn't need to be elevated to run on port 8000.
So I forwarded port 80 to port 8000, and I am now running node on port 8000. It still works identically to how it did before.
The specific command I used to forward port 80 to 8000 is:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8000
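With the redirect in place the hook no longer needs sudo at all. A minimal post-receive sketch along those lines (assumptions: the working copy lives at ~/site, forever is on the deploy user's PATH, and the hook is hooks/post-receive in the bare repo, marked executable; note that git sets GIT_DIR while a hook runs, so it must be unset before pulling in another repository):
#!/bin/sh
# git runs hooks with GIT_DIR pointing at the bare repo; unset it
# so the pull below operates on the working copy instead
unset GIT_DIR
cd "$HOME/site" || exit 1
git pull
# node listens on 8000 now, so no elevated privileges are needed
forever stopall
forever start main.js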

Related

Running nodejs app on CentOS 7 Apache server

I'm trying to run a node web app (built with Meteor) on a CentOS 7 server running EasyApache 4 with WHM/cPanel. I'm trying to run it on a subdomain of one of our main websites, on port 8080.
When going to the subdomain on port 8080 the connection just times out, but I can see the HTML when using curl to access it.
Does anyone have any ideas why it won't work through the browser, and also how I can make it look like it's running straight from the subdomain instead of having to go directly to the port?
EDIT
Below is the curl command we are using to view the HTML:
curl http://subdomain.site.com:8080
Doing that brings back the HTML with no problems.
Had the same problem today. I am using a Memset CentOS 7 server with WHM/cPanel, running EasyApache 4.
After trying everything I could think of, I realised that I had a basic firewall setup which closed all ports that were not listed. After adding port 8080, it worked.
Used this:
sudo iptables -I INPUT 1 -i + -p tcp --dport 8080 -j ACCEPT
I am not 100% certain how secure this is, as I am still researching.
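As a hedged alternative, if the server is using firewalld rather than raw iptables (firewalld is the stock CentOS 7 default, though WHM setups often manage the firewall differently), the equivalent would be:
sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent
sudo firewall-cmd --reload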

Ubuntu: Http-server on port 80 starting up, but can't access from browser?

So I have a web application being run on an http-server via npm. In my package.json file, I have the line "start": "sudo http-server -a [my ip address] -p 8065 -c-1", and my app runs fine when I go to http://myipaddress:8065. However, if I change the 8065 to just 80 in the json file (which is what I want), I still get the success message:
Starting up http-server, serving ./
Available on:
http://myipaddress:80
But when I go to the link, Chrome gives me an ERR_CONNECTION_REFUSED. Anybody know what's going on?
I would suggest there are three possible problems here.
1. Port 80 is already in use.
2. You are not running the application as root (you can't bind to ports <1024 if you are not root).
3. http-server isn't binding correctly.
To check if port 80 is already in use try
netstat -lntu | grep :80
If port 80 is already in use you should see something like
tcp6 0 0 :::80 :::* LISTEN
You will need to close whatever is running on port 80 (Apache? nginx?).
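To identify which process is holding the port before you stop it, either of these should work (assuming lsof or ss is installed, which is typical but not guaranteed on minimal systems):
sudo lsof -i :80
sudo ss -lntp | grep ':80'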
To check if you can actually bind to port 80, try running http-server from the console rather than via npm i.e.
sudo http-server -a [my ip address] -p 80 -c-1
If the above works you should be able to run npm as root to start your http-server i.e.
sudo npm start
You may need to remove sudo from your package.json:
"start": "http-server -a [my ip address] -p 8065 -c-1"
We need to make sure that http-server is working correctly on your system. We will test it with w3m, a console-based web browser.
You may need to install w3m with sudo apt-get install w3m if you do not have it already.
Create a new directory: mkdir /tmp/testing
cd into the new directory: cd /tmp/testing
Start http-server with http-server . -a localhost -p 1234
Visit http://localhost:1234 with w3m: w3m http://localhost:1234/
Start http-server with http-server . -a localhost -p 80
Visit http://localhost in w3m (w3m http://localhost/). Does it work?
Quick tests:
Try to access it via the localhost address, either localhost or 127.0.0.1, to bypass any potential firewalls.
Try to telnet to this address on port 80 to see what the server replies, if anything (see the sketch after this list).
Do you have Apache installed? Are you sure putting your application server on port 80 does not conflict with Apache?
In that case it is better to redirect port 80 to your application server rather than just starting it on Apache's port.
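A hedged version of that telnet check (assuming the telnet client is installed; a "Connection refused" here corresponds to Chrome's ERR_CONNECTION_REFUSED and means nothing is listening on the port):
telnet 127.0.0.1 80
GET / HTTP/1.0
Type the GET line once connected and press Enter twice; any server that is actually bound to port 80 should answer with response headers.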
Is it error 102? Check this link. Probably it's caused by some extensions you installed.
To run node.js apps on ports below 1024 you need root access. Use sudo node app.js. Also, don't forget to open the firewall, and make sure nobody else is listening on port 80.

SCP File from local to Heroku Server

I'd like to copy my config.yml file from my local Django app directory to my Heroku server, but I'm not sure how to get the user@host.com format for Heroku.
I've tried running 'heroku run bash' and then
scp /home/user/app/config.yml
but I'm not sure how I can get it into the
scp user@myhost.com:/home/user/dir1/file.txt user@myhost.com:/home/user/dir2
format.
As @tamas7 said, it's firewalled, but your local machine is probably also firewalled. So unless you have a private server with SSH accessible from the Internet, you won't be able to scp.
I'm personally using transfer.sh free and open source service.
Upload your config.yml to it:
$ curl --upload-file ./config.yml https://transfer.sh/
https://transfer.sh/66nb8/config.yml
Then download it back from wherever you want:
$ wget https://transfer.sh/66nb8/config.yml
According to http://www.evans.io/posts/heroku-survival-guide/ incoming connections are firewalled off. In this case you need to approach your local machine from the Heroku server.
heroku run bash
scp user@mylocalmachine:/home/user/dir/file.txt .
This is a bit late to answer this question, but I use services like localtunnel - https://localtunnel.github.io/www/ to copy files from local machine to heroku.
First, run a python HTTP server in the directory where the file is located.
cd /path/to/file
python3 -m http.server
This starts a server on port 8000. Configure localtunnel to connect to that port.
lt -s mylocal -p 8000
Now from your heroku machine, you can fetch the file via curl.
curl -XGET http://mylocal.localtunnel.me/myfile.txt > myfile.txt
You could also use a service like https://ngrok.com/ to open up a TCP tunnel into your local machine.
You will need to enable Remote Login as in simlmx's answer.
On your local machine open the TCP tunnel just like this:
$ ngrok tcp 22
And then, on the Heroku console, just use SCP with the PORT and HOST that Ngrok provided.
$ scp -P [PORT] username@[HOST]:~/path/to/file.ext .
If you need to download your entire repo, for example to recover an app that you no longer have locally, use heroku git:clone -a myapp. Docs.
Expanding on tamas7's answer:
You can connect to your computer from the heroku server.
If your computer is behind a router, you'll also need to forward the connection to your computer.
1. Your computer must accept SSH connections
On my Mac it was as simple as enabling it in the Preferences / Sharing panel.
2. Your router needs to forward the connection to your computer.
Go to your router's settings page in your browser (typically 192.168.0.1 but varies depending on the router). Find the port forwarding section and forward some port to your computer on port 22.
This is how it looked on my TP-Link: I made sure that port 22000 was forwarded to my computer (192.168.0.110) on port 22.
3. Find your external IP
Simply google "what is my IP".
4. Scp your file from heroku
heroku run bash
scp -P 22000 your_user@your_external_IP:/path/to/your/file .
5. Undo everything!
Once you're done it's probably good practice to disable the port forwarding and remote login.

How to run node.js as non-root user?

I'm running a node.js server that will serve requests on port 80, amongst others. Clearly this requires the application to run as root (on Linux).
Looking at this post (http://syskall.com/dont-run-node-dot-js-as-root) as an example it's clear that there are simple ways to allow node to be run as a non-root user, but I'm wondering if anyone has views on the advantages/disadvantages of the different methods suggested:
1. Code: use setuid() to drop down from root to a non-privileged user after listening on port 80 is established.
2. Use a proxy server of some sort to redirect requests to a port >1024 (so node does not need to run as root).
3. Use iptables to forward port 80 to another port (ditto, node would not run as root).
Thanks
Option 1 requires you to launch the node server as root. Not ideal.
Option 2 adds overhead to every handled request and adds another failure point to your stack.
Option 3 Is the simplest and most efficient method.
To implement Option 3, add the following to your system init scripts (/etc/rc.d/rc.local on RedHat-based systems, such as those on AWS).
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
That will redirect requests from port 80 to port 3000.
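To confirm the rule was actually installed (and to find its rule number if you ever need to delete it), a quick check is:
sudo iptables -t nat -L PREROUTING -n --line-numbers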
(I haven't got enough reputation to add a comment to Matt Browne's answer, so I'm writing this as an answer. Feel free to edit.)
There is a simpler method to load iptables rules automatically after a reboot than the one described in Matt Browne's link: you can install iptables-persistent from the repositories using apt-get:
apt-get install iptables-persistent
Rules still need to be saved manually like this:
IPv4:
iptables-save > /etc/iptables/rules.v4
IPv6:
iptables-save > /etc/iptables/rules.v6
(Source: http://www.thomas-krenn.com/de/wiki/Iptables_Firewall_Regeln_dauerhaft_speichern (German))
I love the simplicity of this workaround:
sudo setcap 'cap_net_bind_service=+ep' `which node`
It also works for programs other than node.js, by the way.
Basically, as the second parameter you pass the path to the program executable (like /usr/bin/nodejs on Ubuntu); in the command above, `which node` provides it dynamically, making this work independently of the Linux distro.
Beware, though, that when you upgrade node.js or the executable gets overwritten for some other reason, you will have to execute the same command again.
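To check whether the capability is currently set, for example after an upgrade, the companion getcap tool can be used; a small sketch (the printed format varies slightly between libcap versions):
getcap `which node`
# prints something like: /usr/bin/node cap_net_bind_service=ep
# and to remove the capability again:
sudo setcap -r `which node`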
Sources:
How to: Allow Node to bind to port 80 without sudo,
Is there a way for non-root processes to bind to "privileged" ports on Linux?

How can I run node.js Express in production mode via sudo?

I'm using the npm package express version 2.5.2 with node version 0.6.5. I appear to be running bash version 4.1.5 on Debian 4.4.5.
I'm trying to run my server in production mode but it still runs in development mode.
I run these commands in my bash shell:
$ export NODE_ENV=production
$ echo $NODE_ENV
production
$ sudo echo $NODE_ENV
production
$ sudo node bootstrap.js
I have this code inside bootstrap.js:
var bootstrap_app = module.exports = express.createServer();
//...
console.log(bootstrap_app.settings.env);
and here's what I see printed to standard out:
development
Is this a problem with my usage, or my system?
EDIT:
Thanks to ThiefMaster for his properly identifying that this issue stems from my running node as root. ThiefMaster suggested using iptables to forward from port 80 to an unprivileged port, but my system gives me an error. Moving this discussion to superuser.com or serverfault.com (link to follow)
Most environment variables are unset when using sudo, for security reasons. So you cannot pass that environment variable to node without modifying your sudoers file to allow that variable to pass through.
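As a sketch of that sudoers change (assuming you do want to keep launching node via sudo): edit the file safely with visudo and whitelist the variable with env_keep.
sudo visudo
# then add this line to the sudoers file:
Defaults env_keep += "NODE_ENV"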
However, you shouldn't run node as root anyway. So here's a good workaround:
If you just need it for port 80, run node on an unprivileged port and set up an iptables forward to map port 80 to that port:
iptables -A PREROUTING -d 1.2.3.4/32 -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 2.3.4.5:1234
Replace 1.2.3.4 with your public IP, 2.3.4.5 with the IP node runs on (could be the public one or 127.0.0.1) and 1234 with the port node runs on.
With a sufficiently recent kernel that has capability support you could also grant the node executable the CAP_NET_BIND_SERVICE privilege using the following command as root:
setcap 'cap_net_bind_service=+ep' /usr/bin/node
Note that this will allow any user on your system to open privileged ports using node!
Alternatively, pass the environment variable on the sudo command line itself:
sudo NODE_ENV=production /usr/local/bin/node /usr/local/apps/test/app.js
