I'm building a web application using the MEAN stack. The site has authentication (using passport.js), so I would like to secure our connection with SSL/TLS.
For our deployment we're using nginx as a reverse proxy to the Node app running on the same AWS EC2 instance.
My question is: with my setup, what is the best-practice way to set up an HTTPS (SSL/TLS) connection? Should I get a certificate and set it up at the nginx layer? Should I do it in my Node app directly? Is there some other, better way?
I've done some googling but haven't found anything definitive. If anyone could point me to an article on the topic, that would be very useful as well.
Thanks in advance!
First, it's good to have SSL running on NGINX, so the communication is encrypted for the visitor in the first place (at least as far as NGINX). If you're running Node on the same instance, it's probably not strictly necessary to also encrypt the traffic between NGINX and Node. But as soon as NGINX runs on a different machine, you should use SSL on the Node side too, since the traffic between them could otherwise be intercepted by an attacker.
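As a sketch, that NGINX-terminated setup might look like this (the certificate paths and the Node port 3000 are assumptions, not from the question):

    server {
        listen 443 ssl;
        server_name example.com;

        # hypothetical certificate paths
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            # Node app port is an assumption
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # lets the app (e.g. passport redirects) know the original scheme
            proxy_set_header X-Forwarded-Proto https;
        }
    }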
Related
A React app and a Node.js server, which is used to retrieve and manipulate the data, are running on the same machine. When accessing the app locally it works fine, but when accessed externally the app is visible but has no data. The reason is that the port the application is running on is open, but the port the Node.js server is running on is not.
My question is this: what is the best way to solve this issue? The simplest solution would be to open up the other port, but I assume that is not the most secure one.
Any suggestions would be appreciated.
Open the port to the outside world and implement a token-based request verification system.
You can implement CSRF token verification, which ensures that each request comes from a trusted source.
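As a rough illustration with Express, the csurf middleware can handle that verification (the package choice and routes here are assumptions, not part of the question's setup):

    // minimal sketch using Express with the csurf middleware
    // (assumes: npm install express cookie-parser csurf)
    var express = require('express');
    var cookieParser = require('cookie-parser');
    var csrf = require('csurf');

    var app = express();
    app.use(cookieParser());
    app.use(csrf({ cookie: true }));

    // hand the token to the front end; it must come back on every state-changing request
    app.get('/csrf-token', function (req, res) {
      res.json({ csrfToken: req.csrfToken() });
    });

    // if the X-CSRF-Token header is missing or wrong, csurf rejects the
    // request with a 403 before this handler ever runs
    app.post('/data', function (req, res) {
      res.json({ ok: true });
    });

    app.listen(3000);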
Do this using a reverse proxy server, like nginx, listening on the open https port. The reverse proxy handles the https encryption, rather than burdening your nodejs code with it. nginx's event-driven worker processes handle https efficiently.
The reverse proxy passes requests along to your nodejs app at http://localhost:3000. In my experience, this arrangement works very well at large scale.
Explaining how to do this is too much for a stack overflow answer. But you'll find plenty of online advice.
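The Node side of the arrangement is small enough to sketch, though. Binding the app to the loopback interface means the Node port is never exposed to the outside world at all; only the reverse proxy can reach it (Express and port 3000 are assumptions here):

    var express = require('express');
    var app = express();

    app.get('/api/data', function (req, res) {
      res.json({ hello: 'world' });
    });

    // bind to 127.0.0.1 so the app is reachable only through the reverse
    // proxy, never directly from outside the machine
    app.listen(3000, '127.0.0.1');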
I have an API server running behind an nginx reverse proxy. It is important that all requests to my API server are secured via TLS, since it handles sensitive data.
I've set up nginx to work with TLS (LetsEncrypt), so that seems to be okay. However, requests from nginx to my API server are still insecure http requests (this is all happening across Docker containers, by the way).
Is it a best practice to also set up https between the reverse proxy and the API server? If so, how would I go about doing that without over-engineering it?
It all comes down to how secure or paranoid you'd like your implementation to be. It may also depend on the type of data you're playing with. For instance: I'd definitely do this for credit card numbers or other sensitive information.
As the comments have already stated, you would typically terminate SSL connections at the front-facing web server, assuming the API backend is also inside your LAN, which you trust and control. If you want to go that extra mile, you could also set up SSL on the API backend. Details of how to do that depend on the software you're using on your backend.
If you do decide to implement SSL on the API backend, the setup would be similar to what you did to set up Nginx with SSL on the frontend, with the main difference being that you don't need a public certificate on the backend. It can be self-signed, since no one besides your web server will be talking to it. Then it's just a matter of fixing all the URIs in your code to use HTTPS.
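If your backend happens to be Node, a minimal sketch of that self-signed setup might look like this (the paths and port are hypothetical):

    var https = require('https');
    var fs = require('fs');

    // self-signed pair, e.g. generated with:
    //   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365
    var options = {
      key: fs.readFileSync('/etc/api/key.pem'),   // hypothetical paths
      cert: fs.readFileSync('/etc/api/cert.pem')
    };

    https.createServer(options, function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true }));
    }).listen(8443);  // port assumed

On the nginx side, the matching location would then use proxy_pass https://... instead of http:// (nginx does not verify upstream certificates by default, so the self-signed cert is fine).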
I want to ask about some good practices. I have a Node.js (Express) web server and a socket.io push server (in case the technology matters). I could turn both of them into one application, but I want them separated (they can communicate with each other if necessary). There are two reasons to do that:
It will be easier to manage, debug and develop the app;
It will be a lot easier to scale the app. I can just add another instance of push server or web server if necessary;
At least, that is what I believe. The only problem is that when a client connects to the separate socket.io server, it won't send cookies (different port, cross-domain policy).
The workaround I came up with is to put a reverse proxy (written in Node.js as well) in front, check what kind of request we are dealing with, and send it to the web server or push server accordingly. Great, now we have cookies in both the web server and the push server. The reverse proxy can also act as a load balancer, which is an additional bonus.
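In code, the routing half of that proxy might look roughly like this (a sketch built on the node-http-proxy package; the ports are placeholders):

    var http = require('http');
    var httpProxy = require('http-proxy');

    var proxy = httpProxy.createProxyServer({});

    http.createServer(function (req, res) {
      // socket.io traffic goes to the push server, everything else to the web server
      var target = req.url.indexOf('/socket.io/') === 0
        ? 'http://localhost:8081'   // push server (placeholder port)
        : 'http://localhost:8080';  // web server (placeholder port)
      proxy.web(req, res, { target: target });
    }).on('upgrade', function (req, socket, head) {
      // websocket upgrades are forwarded to the push server as well
      proxy.ws(req, socket, head, { target: 'http://localhost:8081' });
    }).listen(80);

Because both servers are now reached through the same origin, the browser sends the same cookies to each.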
It looks like a good idea to me. What do you think about this design? Perhaps any other workaround for cookie problem?
I recently did something similar. We initially used a node.js reverse proxy but ran into reliability/scalability problems. We found that serving static files and proxying requests was best left to nginx. HAProxy is also a very viable solution for stand-alone proxying.
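A rough sketch of that division of labour in nginx (the asset directory and Node port are assumptions):

    server {
        listen 80;
        server_name example.com;

        # nginx serves the static assets directly (directory is an assumption)
        location /static/ {
            root /var/www/app;
        }

        # everything else goes to the Node app (port assumed)
        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
        }
    }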
HAProxy
Nginx as a reverse proxy
I have two different applications on the same server. One of them is running on port 80 (mydomain.com), the other on port 443 (sub.mydomain.com) with a wildcard certificate.
The first application is only for informational purposes and doesn't need websocket support.
The second application should have secure websockets support (wss protocol).
I tried to set up the juggernaut gem (for websockets) for my Rails app with an nginx server on the Engine Yard cloud, but I have one problem. Engine Yard cloud provides only two open ports: 80 and 443. I know that nginx does not fully support HTTP/1.1 reverse proxying, so I can't use nginx to proxy websocket requests to a specific local port (in my case, port 8080).
I tried HAProxy, and it works for me when I use only insecure websockets, but I need to support secure websockets. As I understand it, in this case I should use something like stunnel to terminate the HTTPS traffic and then hand it to HAProxy, but when I tested this, the server ran several times slower, and I still couldn't get the secure socket connection to work :(
Maybe I'm doing something wrong? Maybe someone can explain how to set up nginx for multiple applications (one of which should work via HTTPS) plus secure websockets using only two ports (80 and 443).
P.S. I also tried node-http-proxy. With it I was able to set up a proxy for the different nginx applications, but I couldn't get websockets running (only the handshake went through nginx, never the switch to the websocket protocol).
I did some research on the various reverse proxies and websockets not too long ago. The bottom line is that websockets are new, and reverse-proxy support for them is very poor right now.
The recommendation I saw, and agree with, is that you should run your websockets on a different stack than the rest of your site. That usually means putting them on a separate domain or subdomain.
You still have to deal with the complexities of getting the reverse proxies working, but it will be less complicated if you don't have to worry about breaking the other stuff.
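For what it's worth, nginx gained websocket proxying in version 1.3.13, so on a new enough build the server block for that separate websocket subdomain can be sketched like this (certificate paths and the backend port 8080 are assumptions):

    server {
        listen 443 ssl;
        server_name sub.mydomain.com;

        # hypothetical wildcard certificate paths
        ssl_certificate     /etc/nginx/ssl/wildcard.crt;
        ssl_certificate_key /etc/nginx/ssl/wildcard.key;

        location / {
            proxy_pass http://127.0.0.1:8080;
            # websocket proxying needs HTTP/1.1 and the Upgrade headers passed through
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }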
Also, I agree that maybe you'll get better answers at serverfault or superuser.
I have a VPS where I have hosted a few sites. All are based on the LAMP stack, so it was no big deal. They provide WHM/cPanel for managing different sites. I decided to try node.js and bought a separate domain for it, and now I need some clue as to how to point that domain at the node.js application.
So here are the questions:
1) What is the best way to host a node.js application on a specific domain without hampering the other sites? How do I configure the domain? Yes, I'd like to use the default http port (80) for node.
2) As Apache is already listening on port 80, is it a good idea to use Apache's mod_proxy for this purpose? I mean, if I want to use websockets, will Apache still use separate threads to maintain the connections to node?
PS. I have already seen this question, but the answers don't seem to be convincing.
Edit:
I forgot to mention, I have an unused dedicated IP for that VPS which I can use for node.js.
Follow these steps:
1. Go to "WHM >> Service Configuration >> Apache Configuration >> Reserved IPs Editor" and reserve the IP you want to use for node.js. This releases the IP from Apache.
2. Create a new DNS entry with an A record, like: example.com A YOUR_IP_ADDRESS
3. Tell the node.js server to listen on that IP using server.listen(80, "YOUR_IP_ADDRESS"); (see the sketch below)
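Putting step 3 into a complete, if minimal, sketch:

    // minimal sketch: bind the Node app to the reserved IP so it doesn't clash with Apache
    var http = require('http');

    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from node.js\n');
    });

    // YOUR_IP_ADDRESS is the dedicated IP released from Apache in step 1
    server.listen(80, 'YOUR_IP_ADDRESS');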
If Apache is already listening on port 80, then the only thing you can do is proxy to your node instance. And yes, Apache will create a new thread for each connection.
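A minimal sketch of that Apache side, assuming mod_proxy and mod_proxy_http are enabled and the node app listens on port 3000 (domain and port are placeholders):

    <VirtualHost *:80>
        # hypothetical domain for the node app
        ServerName nodeapp.example.com

        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/
    </VirtualHost>

Note that plain mod_proxy only covers http; for the websocket upgrades you asked about, Apache additionally needs mod_proxy_wstunnel (available from 2.4.5).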
As others have mentioned, there's not a whole lot you can do here. Apache is currently driving your server and node.js won't like riding shotgun.
I'd recommend checking out things like nodester, no.de, heroku, and so on.