I'm going to use Socket.IO to handle WebSockets (or fall back to XHR polling) in a realtime app
built on top of Node.js.
Many people put a proxy in front of their Node.js server, and
I don't understand what the proxy is really for, apart from security.
Are there other reasons to put a proxy in front of Node?
I'm currently using nginx 1.1 as a web server and proxy server.
Unfortunately, I have found that nginx 1.1 supports HTTP/1.1 but not WebSockets.
Should I just use Socket.IO without a proxy?
Or, if I really do need one, how can I set up WebSocket proxying with nginx or some alternative?
You may have noticed that you can only run one server on any given TCP port. If you want to use node.js and any other web server, then you'll want to have a proxy server to send client requests to the correct backend server.
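For instance, a single Node.js process can own the public port and dispatch requests by Host header. This is a minimal sketch of the idea, not part of the original answer; the http-proxy npm package, the hostnames, and the ports are all assumptions:

```javascript
// Minimal reverse-proxy sketch using the "http-proxy" npm package.
// One process owns port 80 and forwards each request to the right backend.
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
  if (req.headers.host === 'app.example.com') {
    proxy.web(req, res, { target: 'http://127.0.0.1:3000' }); // Node.js app
  } else {
    proxy.web(req, res, { target: 'http://127.0.0.1:8080' }); // other web server
  }
}).listen(80);
```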
Related
I'm really confused by reverse proxies. What I understand is: with a forward proxy, the client knows the destination server but the server doesn't know the client; with a reverse proxy, the server knows the client but the client doesn't know that the "server" it's visiting is actually proxying to some other server. And to run a reverse proxy you can use NGINX. But if we can use that, why do Express framework middlewares like http-proxy-middleware exist?
And if my understanding of proxy and reverse proxy is wrong, please correct me.
Let's take an abstract example:
You will agree that you must be using port 3000 or something to run Node.js, right?
And let's say you also use Angular/React or HTML+CSS for your frontend website, which runs on, say, port 4200 (the default for Angular).
Now what if you have only one server and want two different services (the frontend in Angular and the backend in Node.js) to run on that single server?
Then you need something between your clients and the server to decide where each request should go: to Angular, to Node.js, or to any other service running on the same server.
That is what a reverse proxy such as NGINX does: you define some rules, and on the basis of those rules the administrator can serve various services from the same server (see the config sketch below).
This is the simplest example I can think of off the top of my head.
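To make that concrete, here is a hypothetical nginx server block for the example above; the paths and ports are assumptions, not a drop-in config:

```nginx
# One server, two services: /api goes to Node.js, everything else to Angular.
server {
    listen 80;
    server_name example.com;

    location /api/ {
        proxy_pass http://127.0.0.1:3000;  # Node.js backend
    }

    location / {
        proxy_pass http://127.0.0.1:4200;  # Angular frontend
    }
}
```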
I would like to add some real-time data updates, using push, to an existing CakePHP application. It seems to me that WebSockets are the best way to do so, and from what I've read the easiest way to start using WebSockets is with Node.js. Now the issue I have is that my application server is very, very limited port-wise, and there is virtually no way to change that.
I have Apache currently running on *:80 and *:443 and sslh listening on port *:4433. Requests from the outside are sent to my server on :4433, and sslh takes care of handling the SSH and HTTPS traffic; on the inside, however, all my client machines use :443 directly. I could potentially open more ports for inside clients, but from the outside there is currently no way to do this. Most of my clients connect from the inside network, but more and more are using the application from outside too.
Note that port 80 is only used to redirect users entering http://example.com to https://example.com, as all my services are encrypted. So if Node.js were able to redirect every HTTP request to HTTPS and use port 80 for secure WebSockets, that would work too!
My question: is it possible to run Apache and WebSockets (probably in the form of Node.js) on the same port, with either Node.js working as a proxy for Apache or Apache working as a proxy for Node.js?
I'm running a real-time web application that uses the Symfony 2 PHP framework on the backend. I want to implement WebSockets for the real-time interaction. Is it possible to install a Node.js server on the same machine as my Symfony 2 server to handle the WebSocket connections? If so, is it standard to open another port (say 81) to handle them?
Yes, it is possible. Why not? It's just another application.
As for the second question: you can either open another port, which is easy to do (WebSockets are not limited by the cross-origin policy) but may cost you some data (cookies), or you can put a proxy in front that sends HTTP requests to the web server and WebSocket requests to the Node.js server. The latter can be recognized by the special Upgrade: websocket header. Either way, the WebSocket server has to listen on a different port (unless you are developing the application entirely in Node.js).
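A rough sketch of that kind of proxy in Node.js (my illustration, assuming the http-proxy npm package; the backend ports are made up):

```javascript
// Plain HTTP requests go to the existing web server; requests carrying an
// Upgrade header (the WebSocket handshake) go to the Node.js WebSocket server.
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

var server = http.createServer(function (req, res) {
  proxy.web(req, res, { target: 'http://127.0.0.1:8080' }); // e.g. the Symfony web server
});

// Node emits 'upgrade' only for requests with an Upgrade header,
// so this is exactly where WebSocket handshakes arrive.
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, { target: 'ws://127.0.0.1:8081' });
});

server.listen(80);
```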
I want to ask about some good practices. I have a Node.js (Express) web server and a socket.io push server (in case the technology matters). I could turn both of them into one application, but I want them separated (they can communicate with each other if necessary). There are two reasons to do that:
It will be easier to manage, debug and develop the app;
It will be a lot easier to scale the app: I can just add another instance of the push server or the web server if necessary;
This is at least what I believe. The only problem is that when a client connects to the separate socket.io server, it won't send cookies (different port, cross-domain policy).
The workaround I came up with is to put a reverse proxy (written in Node.js as well) in front, check what kind of request we are dealing with, and send it to the web server or the push server accordingly (sketched below). Great, now we have cookies in both the web server and the push server. The reverse proxy can also act as a load balancer, which is an additional bonus.
It looks like a good idea to me. What do you think about this design? Is there perhaps another workaround for the cookie problem?
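Roughly, the proxy I have in mind looks like this (a sketch only; the http-proxy package and the ports are assumptions). Since socket.io serves its traffic under /socket.io/ by default, routing by URL keeps everything on one origin, so the browser sends its cookies to both backends:

```javascript
// Route by URL prefix: /socket.io/ traffic goes to the push server,
// everything else to the Express web server. Same origin, shared cookies.
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});
var web  = { target: 'http://127.0.0.1:3000' }; // Express web server
var push = { target: 'http://127.0.0.1:3001' }; // socket.io push server

var server = http.createServer(function (req, res) {
  proxy.web(req, res, req.url.indexOf('/socket.io/') === 0 ? push : web);
});

// WebSocket upgrades belong to the push server too.
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, push);
});

server.listen(80);
```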
I recently did something similar. We initially used a Node.js reverse proxy but ran into reliability/scalability problems; we found that serving static files and proxying requests was best left to nginx. HAProxy is also a very viable solution for stand-alone proxying.
HAProxy
Nginx as a reverse proxy
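For what it's worth, newer nginx releases (1.3.13 and later) can proxy WebSockets; a minimal location block looks roughly like this (the upstream address is an assumption):

```nginx
# Pass socket.io traffic, including WebSocket upgrades, to a Node.js backend.
location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;                   # upgrades require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;   # forward the Upgrade header
    proxy_set_header Connection "upgrade";
}
```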
I have two different applications on the same server. One of them is running on port 80 (mydomain.com), the other on port 443 (sub.mydomain.com) with a wildcard certificate.
The first application is only for informational purposes and doesn't need WebSocket support.
The second application should have secure WebSocket support (the wss protocol).
I tried to set up the juggernaut gem (for WebSockets) for my Rails app with an nginx server on the Engine Yard cloud, but I have one problem: Engine Yard cloud provides only two open ports, 80 and 443. I know that nginx does not fully support HTTP/1.1 reverse proxying, so I can't use proxying from nginx to redirect WebSocket requests to the specific local port (in my case that port is 8080).
I tried HAProxy, and it worked for me when I used only insecure WebSockets, but I need to support secure WebSockets. As far as I know, in this case I should use something like stunnel to tunnel my HTTPS requests and then use HAProxy, but when I tested it I saw that the server ran several times slower, and I still couldn't get the secure socket connection to work :(
Maybe I'm doing something wrong? Maybe someone can tell me how to set up nginx for multiple applications (one of which should work via HTTPS) and secure WebSockets using only two ports (80 and 443).
P.S. I also tried node-http-proxy; with it I was able to set up a proxy for the different nginx applications, but I couldn't get WebSockets running (only the "handshake" happened via nginx, never the "switching protocols").
I did some research on the various reverse proxies and WebSockets not too long ago. The bottom line is that WebSockets are new, and reverse-proxy support for them is very poor right now.
The recommendation I saw and I agree with is that you should run your websockets on a different stack than the rest of your items. That usually means putting it on a separate domain or subdomain.
You still have to deal with the complexities of getting the reverse proxies working, but it will be less complicated if you don't have to worry about breaking the other stuff.
Also, I agree that maybe you'll get better answers at serverfault or superuser.