Nginx + SSL + Rails + Juggernaut (Node.js) + Engineyard - node.js

I have two different applications on the same server. One of them runs on port 80 (mydomain.com); the other runs on port 443 (sub.mydomain.com) and has a wildcard certificate.
The first application is only for informational purposes and doesn't need websocket support.
The second application needs secure websocket support (the wss protocol).
I tried to set up the juggernaut gem (for websockets) for my Rails app with the nginx server on the Engineyard cloud, but I ran into one problem. The Engineyard cloud provides only two open ports: 80 and 443. I know that nginx does not fully support HTTP/1.1 reverse proxying, so I can't use nginx proxying to redirect websocket requests to the specific local port (in my case, port 8080).
I tried HAProxy, and it worked for me when I used only unsecured websockets, but I need to support secure websockets. As I understand it, in this case I should use something like STunnel to tunnel my HTTPS requests and then use HAProxy, but when I tested it the server ran several times slower, and I still couldn't get the secure socket connection to work :(
Maybe I'm doing something wrong? Can someone explain how to set up nginx for multiple applications (one of which should work via HTTPS) plus secure websockets, using only two ports (80 and 443)?
P.S. I also tried node-http-proxy; with it I was able to set up a proxy for the different nginx applications, but I couldn't get websockets running (only the "handshake" happened via nginx, never the "switching protocols" step).

I did some research on the various reverse proxies and websockets not too long ago. The bottom line is that websockets are new, and reverse proxy support for them is very poor right now.
The recommendation I saw, and agree with, is that you should run your websockets on a different stack than the rest of your site. That usually means putting them on a separate domain or subdomain.
You still have to deal with the complexities of getting the reverse proxies working, but it will be less complicated if you don't have to worry about breaking the other stuff.
Also, I agree that maybe you'll get better answers at serverfault or superuser.

Related

Best practice for communicating with a NodeJS server hosted locally from a Bluehost NodeJS server?

I have a web application running on a Bluehost server. I am trying to retrieve files hosted on a local server. On the local server, I have port forwarding and NodeJS listening on port 3000. I could do 80 as well, but from what I have read, that is not safe.
The issue I am running into is mainly the SSL cert for the local Node instance. The web application requires POST requests to be made to https:// sources.
What are some best-practice approaches to making this work? I have heard about installing Apache and running a ProxyPass to port 3000, but I am still concerned that port 80 will have no SSL. Any help would be appreciated!
First, it's worth noting that there are many approaches to hosting a web service.
Node can handle HTTPS connections itself; you should read the native https module documentation for how this works.
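For example, here is a minimal sketch using the native https module (the certificate and key paths are placeholders for your real files):
var https = require('https');
var fs = require('fs');

// Placeholder paths: point these at your actual certificate and private key
var options = {
  key: fs.readFileSync('/path/to/privkey.pem'),
  cert: fs.readFileSync('/path/to/fullchain.pem')
};

// Serve HTTPS directly from Node on the standard port 443
https.createServer(options, function (req, res) {
  res.end('Hello over HTTPS');
}).listen(443);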
I tend to use Nginx (although Apache is great and is a battle-tested solution) as a proxy server in front of Node because, in general, I find it speeds up the process of getting a product live. It also lets you pull concerns such as caching and SSL out of your node server, so your node app can focus on business requirements.
If you go for a proxy server, Nginx (and others), have modules that will handle SSL certificates. Lots of documentation online about how to set this up.
Something to keep in mind is that ports 80 and 3000 are just connection points for traffic. You will only be able to interact with the server on these ports if you bind and expose an application to them. If nothing is exposed on port 80, connection attempts to it will simply fail.
The best practices I tend to employ are:
There is no excuse not to use SSL nowadays; the standard is to expose the HTTPS server on port 443.
If you choose to expose port 80 as well, redirect all of its traffic to 443 (see the sketch below). This guarantees a secure connection.
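A minimal sketch of that redirect done in Node itself (assuming the HTTPS server is already listening on 443; a proxy such as Nginx can do the same job with a single rewrite rule):
var http = require('http');

// Listen on port 80 purely to bounce every request to its HTTPS equivalent
http.createServer(function (req, res) {
  res.writeHead(301, { 'Location': 'https://' + req.headers.host + req.url });
  res.end();
}).listen(80);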

NGINX, The Edge, HAProxy

I was going through the Uber Engineering website when I came across this paragraph, and it confused me a lot. If anyone can make it clear for me, I would be thankful:
The Edge: The frontline API for our mobile apps consists of over 600 stateless endpoints that join together multiple services. It routes incoming requests from our mobile clients to other APIs or services. It’s all written in Node.js, except at the edge, where our NGINX front end does SSL termination and some authentication. The NGINX front end also proxies to our frontline API through an HAProxy load balancer.
This is the link.
NGINX is already a reverse proxy + load balancer, so where does the HAProxy load balancer come into the picture, and where exactly does it fit? What is "the edge" he talked about? Either the writer chose confusing words or I don't know English.
Please help.
It seems like they're using HAProxy strictly as a load balancer, and NGINX strictly to terminate SSL and handle authentication. It isn't necessary in most cases to use HAProxy along with NGINX; as you mentioned, NGINX has load-balancing capabilities, but being Uber, they probably ran into some unique problems that required the use of both. According to the information I've read, such as http://www.loadbalancer.org/blog/nginx-vs-haproxy/ and https://thehftguy.com/2016/10/03/haproxy-vs-nginx-why-you-should-never-use-nginx-for-load-balancing/, NGINX works extremely well as a web server, including the use case where it serves as a reverse proxy for a Node application, but its load-balancing capabilities are basic and not nearly as performant as HAProxy's. Additionally, HAProxy exposes many more metrics for monitoring and has more advanced routing capabilities.
Load balancing is not the core feature of NGINX. In the context of a node.js application, usually what you would see NGINX used for is to serve as a reverse proxy, meaning that NGINX is the web server, and http requests come through it. Then, based on the hostname and other rules, it forwards on the HTTP request to whatever port your node.js application is running on. As part of this flow, often NGINX will handle SSL termination, so that this computationally-intensive task is not being handled by node.js. Additionally, NGINX is often used to serve static assets for node.js apps, as it is more efficient, especially when compressing assets.
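To illustrate the Node side of that arrangement, here is a rough sketch of an app sitting behind such a reverse proxy (port 3000 and the X-Forwarded-Proto header are just conventional choices here; they depend entirely on how the proxy is configured):
var http = require('http');

// The app binds only to a local port; the proxy terminates SSL on 443
// and forwards plain HTTP requests here.
http.createServer(function (req, res) {
  // Forwarded headers are only present if the proxy is configured to set them
  var proto = req.headers['x-forwarded-proto'] || 'http';
  res.end('Proxied request, original protocol: ' + proto);
}).listen(3000, '127.0.0.1');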

Integrate websockets with apache

I would like to add some real-time data updates using push to an existing CakePHP application. It seems to me that websockets are the best way to do so, and from what I've read, the easiest way to start using websockets is with node.js. Now the issue I have is that my application server is very, very limited port-wise, and there is virtually no way to change that.
I currently have apache running on *:80 and *:443 and sslh listening on port *:4433. Requests from the outside are sent to my server on :4433, and sslh takes care of handling ssh and https traffic; however, on the inside, all my clients' machines use :443 directly. I could potentially open more ports for inside clients, but from outside there is currently no way to do this. Most of my clients connect from the inside network, but more and more are using the application from outside too.
Note that port 80 is only used to redirect users entering http://example.com to https://example.com, as all my services are encrypted. So if node.js were able to send every HTTP request to HTTPS and use port 80 for secure websockets, this would work too!
My question: Is it possible to run Apache and Websockets (probably in the form of Node.js) on the same port, and have either Node.js working as a proxy for Apache or Apache working as a proxy for Node.js?

How to point different subdomains to different applications on the same server? (use node.js as a proxy?)

I'm setting up a node.js server but I would also like to have Apache running on there at the same time. Node is going to be the main website, and there will be subdomains that point to Apache.
The only way I can think of how to do this is have the different applications listen to different ports and then have a proxy application that listens to port 80 and then "redirects" the port according to the subdomain used. I'm not sure if this is the right way to do it, or how to do it if it is.
Research has shown me that it could be possible to use Apache as this proxy, though I would prefer it if I didn't have to. If I could somehow use node.js to do it, that would be fantastic (my preferred solution). If that is impractical/impossible, then of course I am open to other ideas.
I would really appreciate some guidance as to how to do this.
You wanted a solution that can serve both Node.js and Apache at the same time, and you wanted Node.js to do the reverse proxying. However, it is best to use a program that is designed for reverse proxying (Nginx, HAProxy) for that job. Using Node.js as a reverse proxy server would be inefficient.
Nginx is something I recommend. It is simple and highly efficient. You can have the Nginx server at the very front, taking in all the requests.
Here is how to set up Nginx as a reverse proxy to Node:
http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/
And here is how to set up Nginx as a reverse proxy to Apache:
http://www.howtoforge.com/how-to-set-up-nginx-as-a-reverse-proxy-for-apache2-on-ubuntu-12.04
Simply combining the two configuration files will let you serve Apache and Node at the same time.
Have a look through this thread
While it discusses some issues with using http-proxy and WebSockets on <= 0.8.x, if you aren't doing that, you should be fine.
You can create a very basic proxy listener like so:
var httpProxy = require('http-proxy');

// Forward everything arriving on port 80 to the back-end on localhost:8888
httpProxy.createServer(8888, 'localhost').listen(80);
And create a back-end server like so:
var http = require('http');
http.createServer(function (req, res) { res.end('hello from the back-end'); }).listen(8888);
But of course, more complex implementations can be accomplished by reading the http-proxy documentation.
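As a rough sketch of such a more complex setup, here is host-based routing using the legacy 0.x http-proxy API that the thread above refers to (the hostnames and ports are made up for illustration):
var httpProxy = require('http-proxy');

// Pick a back-end per Host header (hypothetical hostnames and ports)
httpProxy.createServer(function (req, res, proxy) {
  var target = (req.headers.host === 'app.example.com')
    ? { host: 'localhost', port: 9000 }
    : { host: 'localhost', port: 8888 };
  proxy.proxyRequest(req, res, target);
}).listen(80);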

Hosting PHP and Node.js apps on the same server with multiple domains

I have a Linode VPS, currently running lighttpd to serve up my PHP websites and listening on port 80.
I'm also running Node.js, which listens on port 81, and uses websockets and HTTP to interact with the client.
There are a couple of different domains that I would like to point to this server. Ideally, the domains which host the PHP sites would all talk to the same lighttpd server, and the sites which use node.js would somehow be routed to the port node.js is listening on, unbeknownst to the client (i.e. no 30x redirect).
example-php1.com:80 -> linodebox:80 lighttpd /var/www/example1
example-php2.com:80 -> linodebox:80 lighttpd /var/www/example2
example-node.com:80 -> linodebox:81 node.js
Is there a way to do this, either by setting DNS entries or tweaking iptables? Does lighttpd need to be a proxy for node.js? The websockets feature needs to work without any fallbacks, and visiting a non-node domain, e.g. example-php1.com:81, should not expose the node application.
I feel the perfect solution would require neither changes to existing application code nor proxying between software web servers, but I could be wrong.
What's up Tom!?
I recommend HA-Proxy; it's one of the highest-performance proxies out there and should accomplish what you're trying to do.
I'm doing something similar with nginx acting as a proxy; it's easy but not the fastest.
HA-Proxy's website is here http://haproxy.1wt.eu
If you wanted a 'pure' solution, you could probably get the answer from looking at HA-Proxy's source code. You can't really do it with iptables. Something has to read the HTTP Host header to determine which site the request is for in order to route it locally.
I had basically the same problem and I ended up using node-http-proxy (also available in npm as http-proxy).
You just need a simple config file:
{
  "router": {
    "example-php1.com": "linodebox:80",
    "example-php2.com": "linodebox:80",
    "example-node.com": "linodebox:81"
  }
}
Then just run node-http-proxy --config options.json and you're set. If you want to run lighttpd and node on the same machine, you'll have to start lighttpd on a different port (I use 81 for php and 3000 for node - adjusting the config is easy). I also use forever to manage my node instances.
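For example (purely illustrative), with lighttpd moved to port 81 and node on 3000 as described, the adjusted router might look like this:
{
  "router": {
    "example-php1.com": "127.0.0.1:81",
    "example-php2.com": "127.0.0.1:81",
    "example-node.com": "127.0.0.1:3000"
  }
}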
Ya'll are gonna hate me...
I ended up going with a second IP address, then followed the Linode tutorial to set up multiple static IPs. Then I configured lighttpd to bind to one IP address and Node.js to bind to the other.
This isn't a great solution as it doesn't scale.
Update: lighttpd 1.4.46 (released back in 2017) added multiple ways to accept WebSocket connections:
lighttpd mod_wstunnel
lighttpd mod_proxy
lighttpd mod_cgi
