Node Express Unix Domain Socket Permissions

I am running an nginx server and a Node Express web server under daemontools, set up to communicate over Unix domain sockets. There are just a few problems:
The socket file stays present on shutdown, so I have to delete it when bringing the server back up; otherwise I get the EADDRINUSE error.
The nginx server runs as the nginx user, and the node server runs as the node user.
The socket file gets created by Express when the server starts up, and the umask leaves the permissions on the socket file at 755.
The setuidgid application sets the group to the default group of the user; both are node in this case.
The deployment scripts for the application and daemontools' run script execute before the node server instance is launched, so there is no way to set the permissions on the file there, since it has to be recreated during launch.
If I chgrp and chmod g+w the socket file, everything works fine. Is there a way to set this up so that the node application's socket file is generated with the correct permissions for nginx to write to it, without compromising the security independence of either application? I would even be okay with adding nginx to the node user's group, if there were still a way to make the socket file group writable.

Maybe I am too late, but as a complement to your own answer, there is a solution that avoids adding the nginx user to the node group.
Create a directory just for the socket file, assign it to the node user and the www-data group (or whatever group nginx runs as), and set the set-group-ID bit (SGID) on that directory:
mkdir -p /var/lib/yourapp/socket
chown nodeuser:nginxgroup /var/lib/yourapp/socket
chmod g+rxs /var/lib/yourapp/socket
All files created inside this directory will automatically be owned by the nginxgroup group.

I was able to get it working by adding nginx to the node user's primary group:
gpasswd -a nginx node
And then starting the Express server using the following (sock holds the path to the socket file):
// Create the server
fs.stat(sock, function (err) {
    if (!err) { fs.unlinkSync(sock); }
    http.createServer(app).listen(sock, function () {
        fs.chmodSync(sock, '775');
        console.log('Express server listening on ' + sock);
    });
});
I don't really feel like this is a valid solution, just a hack. Express wasn't built with deleting and setting file perms in mind, and it especially bugs me to have to add the nginx user to the node user's primary group. If there were ever a compromise of the nginx account, the attacker could conceivably have access to all of the application's source, and an avenue to try endless attacks on the code using the socket. The best that I can do is set the umask to 077 for the node user and try to get 100% coverage with a chmod 600 on every file and chmod 700 on every directory, or set the group to the non-default for the user on everything.
That said, I would still appreciate any ideas.

Bobby's answer left me with "connect() to unix:/run/static0.sock failed (13: Permission denied)" in nginx; chmod 777 was the trick. Here's my solution, building on his:
var fs = require('fs');
var http = require('http');
var express = require('express');

var app = express();
app.get('/', function (req, res) {
    res.send('Hello World!');
});

var sock = process.argv[2];
fs.stat(sock, function (err) {
    if (!err) { fs.unlinkSync(sock); }
    http.createServer(app).listen(sock, function () {
        fs.chmodSync(sock, '777');
        console.log('Express server listening on ' + sock);
    });
});
Run like:
$ node server.js /run/static0.sock
Express server listening on /run/static0.sock

I don't know if it's still relevant, but I succeeded in connecting to the Unix socket as the node client (the server is written in C).
With the user that runs node, I simply touched a new file, s.sock, for that purpose; when my server starts, it listens on this Unix socket.
The issue for me was that I had tried running my server with sudo, which gives the socket root privileges.
What worked for me was simply running the server without root privileges; before that, when I tried connecting to it from node, I got:
Error: connect EACCES
Hope it helps.


nginx.conf for NodeJS/React App returning 502 and 405

Trying to set up a staging environment on an Amazon Linux EC2 instance and migrate from Heroku.
My repository has two folders:
Web
API
Our frontend and backend are running on the same port in deployment
In dev, these run on separate ports and all requests from WEB are proxied to API
(for ex. WEB runs on PORT 3000 and API runs on PORT 3001. Have a proxy set up in the package.json file in WEB/)
Currently the application deployment works like this:
Build Web/ for distribution
Copy build/ to API folder
Deploy to Heroku with web npm start
In prod, we only deploy API folder with the WEB build/
Current nginx.conf looks like this (I commented out all my other attempts).
I am also using PM2 to run the process, like so:
$ sudo pm2 start bin/www
pm2 log shows the current thread running.
This is running on PORT 3000 on the EC2 instance.
Going to the public IPv4 DNS for the instance brings me to the login page, which it's getting from the /build folder, but none of the login methods (or any API calls) work.
502 response example
I have tried a lot of different configurations, including setting proxy_pass to port 3000, since that's where the Node process is running.
The only response codes I get are 405 Not Allowed and 502 Bad Gateway.
Please let me know if there is any other information I can provide to find the solution.
It looks like you don't have an upstream block in your configuration; you're trying to use proxy_pass to send traffic to a named server and port instead of a defined upstream. There is an example on this page that shows how to define an upstream and then send traffic to it: https://nginx.org/en/docs/http/ngx_http_upstream_module.html
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
Turns out there was an issue with express-sessions being stored in Postgres.
This led me to retest the connection strings and I found out that I kept receiving the following error:
connect ECONNREFUSED 127.0.0.1:5432
I did have a .env file holding the env variables, but they were not being read by pm2.
So I added this line to app.js:
const path = require("path");
require('dotenv').config({ path: path.join(__dirname, '.env') });
then restarted the app with pm2 with the following command:
$ pm2 restart /bin/www --update-env
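A cheap way to catch this class of problem earlier is to fail loudly when an expected variable is missing, instead of letting the Postgres client silently fall back to 127.0.0.1:5432. A small sketch; the variable name DATABASE_URL and the fallback value are assumptions:

```javascript
// Warn at startup if the .env values never made it into the process,
// instead of waiting for ECONNREFUSED from the database driver later.
var dbUrl = process.env.DATABASE_URL;
if (!dbUrl) {
    dbUrl = 'postgres://localhost:5432/app'; // dev-only fallback
    console.warn('DATABASE_URL not set; using dev fallback ' + dbUrl);
}
console.log('Database target: ' + dbUrl);
```

Run under pm2 with and without --update-env to confirm which environment the process actually sees.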

Run node.js server file using apache default port

I have this node.js server file:
var app = require('http').createServer(handler),
    io = require('socket.io').listen(app),
    fs = require('fs');

app.listen(80);

function handler(req, res) {
    fs.readFile("/client.html", function (err, data) {
        if (err) {
            console.log(err);
            res.writeHead(500);
            return res.end('Error loading client');
        }
        res.writeHead(200);
        res.end(data);
    });
}
Is there a way to make this node.js file run automatically on Apache's default port when a client tries to connect, without having to run it through the cmd?
without having to run it through the cmd
Short answer: not quite. Think of this node.js file as creating a server on a par with Apache.
The script creates a server .createServer() and then tells it to listen to port 80 .listen(80).
Since socket.io is bound to this node server (and can't just be plugged in to Apache) you will have to execute the script (run it through the cmd) to be able to utilize it.
That being said:
I'm sure one could make a daemon (a background program) out of the node server; thus firing it up automatically on system start. If you then specify to run it on port xxxx you could tell Apache to map this port into its own local space (its folders). On a local machine this instruction would look like this: ProxyPass /app http://127.0.0.1:xxxx/
There would be two servers running on one machine; and Apache's http://127.0.0.1/app would redirect to the node server (listening to xxxx).
I would not advise going down that rabbit hole just yet. To start having fun with socket.io on Windows, just create a batch file with the command to run your server, node [path/to/your/server_file.js], for ease of use. Expand your node script. And stop using Apache. (Express is a nice module for serving web content with node...)
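For reference, the ProxyPass mapping described above might look like this in an Apache config, assuming the node server listens on port 3000 (the port and the /app path are illustrative):

```
# Requires mod_proxy and mod_proxy_http to be enabled.
# Forward http://yourhost/app to the node server on port 3000.
ProxyPass        /app http://127.0.0.1:3000/
ProxyPassReverse /app http://127.0.0.1:3000/
```

Apache keeps serving everything else; only requests under /app are handed to the node process.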

Express started on AWS EC2 without root is not reachable

I have deployed an Express application to an EC2 instance, but there is a weird problem. After SSHing into the instance, if I start the server with
node server.js
it is not reachable through the browser.
If I start the server with
sudo node server.js
everything is OK.
Not sure why.
Ports below 1024 are reserved for root, and thus require root permission.
My guess is that you are attempting to bind to port 80 or 443, the default web ports; this requires root permissions.
However, it is a bad idea to run your application as root, so an alternative solution should be implemented.
Root permission is required for low-numbered ports. You should put a proxy such as nginx in front of your app, so that nginx owns the low-numbered port and redirects traffic to your app's high-numbered port.
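A minimal nginx server block for that proxy setup might look like the following, assuming the Express app listens on port 3000 (the port and server_name are assumptions):

```
server {
    listen 80;
    server_name example.com;

    location / {
        # nginx binds the privileged port 80; the node app stays unprivileged on 3000.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```

The node process then runs as an ordinary user and never needs sudo.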

Node.js: works locally, but not on my server

When trying node on my webserver (hosted by some company), I realized that it doesn't work.
The issue I get is a timeout error.
On my local machine, the script works. On the server, it doesn't, but I confirmed that node itself works with a 'hello world' program.
To perform my test on the webserver, I use the simplest node program I can think of (besides 'hello world'):
Simple node program
var http = require('http');
var port = 8080;

console.log("*** Node script launched ***");

var server = http.createServer(function (req, res) {
    console.log('Ok, server launched.');
    res.writeHead(200);
    res.end('Message from the server : ok!');
});

server.listen(port, '0.0.0.0', function () {
    console.log((new Date()) + ' : OK. Server is listening.');
});
Edit: corrected a typo in the program above (changed "Server.listen" into "server.listen"; thanks to #Num8er for noticing). Unfortunately, this didn't solve the issue.
After some research, I corrected one thing: specifying '0.0.0.0' as the IP (before, that argument was left out). But it didn't solve my issue, and now I can't find anything else that might be wrong (in the program, at least; I'm a newbie, so...).
I suspect that the issue may come from my host, but I don't know how to diagnose this situation.
Here is all the data I have:
Program output when launched on the webserver
*** Node script launched ***
Fri Aug 28 2015 01:45:00 GMT+0200 (CEST) : OK. Server is listening.
Output from browser (chrome)
This webpage is not available
ERR_TIMED_OUT
I have 2 questions:
Do you have an idea what the problem might be?
Do you know the steps I could take to diagnose this situation? Is there a way to tell if node is correctly listening? Is there a way to monitor whether my client request reaches the server, and if it is blocked, where?
Thanks.
Loïc.
I think you're stopping the application after you see the words:
"OK. Server is listening."
The code works without problems.
I believe that you're closing the terminal or pressing Ctrl+C.
Try running the app using forever.
Install it using:
npm install -g forever
Run your app using:
forever start app.js
Stop it using:
forever stopall
Check status:
forever list
There is one more thing:
if you're using a cloud service like C9, it's better to change the port line to:
var port = process.env.PORT || 8080;
Also, I'm not sure which webserver you're using, but the port number might also be an issue.
I do know that on Heroku deployments I have to use something along the lines of:
var port = process.env.PORT || 8080;
This means that your application is fine but port 8080 is not open on your server. Have you tried navigating to http://serverip:8080? If that also gives ERR_TIMED_OUT, then the problem is the port, as I said.

Node.js Deployment in openshift

I was trying to deploy a Node.js application to OpenShift as described in the link here.
I understand this code
var http = require('http');
var server = http.createServer(function (req, res) {
    res.writeHead(200);
    res.end('Hello Http');
});
server.listen(3000);
and there is no issue running it locally:
$ node server.js // saved as server.js
However, how does this work when I commit this application to OpenShift? This is very simple code; I also have some downloaded code that is a chat application, and the client and server need to be configured to listen on some port (I was using port number 3000 on localhost).
It works on port 3000 on localhost, but how can I make it work on OpenShift?
You need to listen on port process.env.OPENSHIFT_NODEJS_PORT. So something like this should work:
server.listen(process.env.OPENSHIFT_NODEJS_PORT || 3000);
See here for example: Error: listen EACCES on Openshift app
The issue with socket.io is that you have the npm package installed locally but not on OpenShift (dependencies don't get pushed). You can log in through SSH (look for "Want to log in to your application?" in the right-hand menu of the OpenShift control panel, follow the instructions and use the SSH connection provided), then, from a terminal or PuTTY, go to:
cd app-root/repo
or
cd $OPENSHIFT_REPO_DIR
and then
npm install socket.io
I've used this to install mongoose and other dependencies without trouble. You can also use
node server.js
from the command line to run the site ;)
