I'm looking for some ideas...
I have a series of robust Node.js apps that need to be delivered to specific users (post-authentication). There is virtually no file serving, only the initial delivery of the index; the rest of the communication is all done via socket.io.
ClientA (login) needs to be connected to an application on, let's say, :90001
ClientB (login) on :90002
ClientC (login) on :90003
*All HTTP/1.1 WebSocket traffic needs to be secure
I have tried a few configurations:
stunnel/varnish/nginx
stunnel/haproxy
stunnel/nginx
I was thinking a good approach would be to use Redis to store sessions and validate against a cookie; however, that would most likely mean exposing Node.js on the frontend.
questions:
What are the risks in using node-http-proxy as the front piece?
Is this something that I should deem possible (to have one piece that "securely" redirects WebSocket traffic and manages specific sessions to many independent/exclusive backends)?
I am aware that nginx 1.3 (in development) is slated to support WebSockets; is this worth holding out for?
Has anyone had any thorough experience with Yao's tcp_proxy module for nginx (reliability/scalability)?
I can't say I have done this before, but I can offer some ideas perhaps:
One Node authentication server which takes login details and sets a cookie specific to the server the user should connect to. It then redirects to the index page, at which point HAProxy can direct the request based on the cookie. See this question: https://serverfault.com/questions/75385/is-there-a-way-to-configure-haproxy-to-send-traffic-based-on-a-cookie
Alternatively, you could have the above authentication on all servers instead of just one. HAProxy would have to be configured to balance across all nodes if there is no relevant cookie header. Each node would do the set-cookie + redirect, and subsequent requests should end up on the specific node instance.
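To make the first option concrete, here is a minimal sketch of the authentication piece in Node, assuming Express; the cookie name (SERVERID) and the backend identifier (app1) are placeholders that would have to match whatever HAProxy is configured to inspect:

    // Sketch of the auth server: validate the login, record the user's
    // assigned backend in a cookie, then redirect to the index page.
    var express = require('express');
    var app = express();

    app.use(express.urlencoded({ extended: false }));

    // Placeholder lookup; a real app would check a user store.
    function authenticate(username, password) {
      if (username === 'demo' && password === 'demo') {
        return { assignedBackend: 'app1' };
      }
      return null;
    }

    app.post('/login', function (req, res) {
      var user = authenticate(req.body.username, req.body.password);
      if (!user) return res.status(401).send('Invalid credentials');

      // HAProxy can route every subsequent request on this cookie.
      res.cookie('SERVERID', user.assignedBackend, { httpOnly: true, secure: true });
      res.redirect('/');
    });

    app.listen(8000);

On the HAProxy side you would then declare one backend per Node instance and match on the SERVERID cookie, along the lines of the linked question.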
BTW, HAProxy 1.5-dev now has built-in support for SSL, so there is no need for stunnel anymore.
Related
I have an API server running behind an nginx reverse proxy. It is important to have all requests to my API server be secured via TLS since it handles sensitive data.
I've set up nginx to work with TLS (Let's Encrypt), so that seems to be okay. However, requests from nginx to my API server are still insecure HTTP requests (this is all happening across Docker containers, by the way).
Is it a best practice to also setup https between the reverse proxy and the API server? If so, how would I go about doing that without over-engineering it?
It all comes down to how secure or paranoid you'd like your implementation to be. It may also depend on the type of data you're playing with. For instance: I'd definitely do this for credit card numbers or other sensitive information.
As the comments have already stated, you would typically terminate SSL connections at the front facing webserver, assuming the API backend is also inside your LAN, which you trust and control. If you want to go that extra mile, you could also set up SSL on the API backend. Details of how to do that depend on the software you're using on your backend.
If you do decide to implement SSL on the API backend, the setup would be similar to what you did to setup Nginx with SSL on the frontend, with the main difference being you don't need to use a public certificate on the backend. It can be self-signed, since no one else besides your web server will be talking to it. Then it's just a matter of fixing all the URIs in your code to use HTTPS.
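For instance, if the backend happens to be Node, a minimal sketch of the API process terminating its own TLS with a self-signed pair might look like this (the file paths are assumptions; generate the pair with something like openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem):

    // Backend API serving HTTPS directly with a self-signed certificate.
    const https = require('https');
    const fs = require('fs');

    const options = {
      key: fs.readFileSync('/etc/ssl/private/api-selfsigned.key'),   // assumed path
      cert: fs.readFileSync('/etc/ssl/certs/api-selfsigned.crt'),    // assumed path
    };

    https.createServer(options, (req, res) => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true }));
    }).listen(8443);

Nginx would then proxy_pass to https://<backend>:8443; since the certificate is self-signed, you either point proxy_ssl_trusted_certificate at it or relax verification with proxy_ssl_verify off, depending on how strict you want to be.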
I'm wondering if there is a way to only accept HTTP requests from a target React Native app.
What I mean is, currently my Node.js server (using Express) accepts connections from any source (even Postman requests). Is there a way to make the server only listen to the desired app?
Use some level of security tokens. I use Passport-JWT with JSON Web Tokens that are generated after the user logs in.
TCP/IP is client agnostic, so the server will always allow connections from anyone not on an IP-based blacklist. However, you can easily use tokens, keyword parameters or other means to block clients you don't like. The question is whether the connection needs to be secure, hence some kind of token exchange, or whether you just want to prevent accidental damage. curl is very helpful if you want to test what a client can do with your server.
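As a rough illustration of the token approach, here is a sketch of gating an Express API with the jsonwebtoken package (the secret handling and header layout are assumptions; Passport-JWT wraps the same idea in a strategy):

    // Reject any request that doesn't carry a valid JWT in the Authorization header.
    const express = require('express');
    const jwt = require('jsonwebtoken');

    const app = express();
    const SECRET = process.env.JWT_SECRET || 'change-me';

    function requireToken(req, res, next) {
      const header = req.headers.authorization || '';
      const token = header.startsWith('Bearer ') ? header.slice(7) : null;
      if (!token) return res.status(401).json({ error: 'missing token' });

      try {
        req.user = jwt.verify(token, SECRET); // throws if invalid or expired
        next();
      } catch (err) {
        res.status(401).json({ error: 'invalid token' });
      }
    }

    app.get('/api/private', requireToken, (req, res) => {
      res.json({ hello: req.user.sub });
    });

    app.listen(3000);

Note this still doesn't tie the API to one specific app; anything holding a valid token can call it, which is the point above about TCP/IP being client agnostic.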
I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g. upload an image from a URL). The problem is that this 'leaks' my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task, and would involve setting up another VPS to act as a proxy.
Does this extra forward proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
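The same idea in Node, as a sketch only (the proxy address proxy.example.internal:3128 is an assumption): for a plain-HTTP target you send the request to the proxy and put the absolute URI in the request line.

    // Outbound request routed through a forward proxy at the application layer,
    // using only Node's built-in http module.
    const http = require('http');

    const options = {
      host: 'proxy.example.internal',        // the forward proxy, not the target
      port: 3128,
      method: 'GET',
      path: 'http://example.com/image.jpg',  // absolute URI, as proxies expect
      headers: { Host: 'example.com' },
    };

    http.request(options, (res) => {
      console.log('status via proxy:', res.statusCode);
      res.resume();
    }).end();

For HTTPS targets the client has to open a CONNECT tunnel to the proxy first; helper libraries such as https-proxy-agent take care of that detail.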
As far as the actual proxy server: No, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions. As such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
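A sketch of such a dedicated downloader service, assuming Express and a made-up /fetch endpoint; a real version would also whitelist or validate the target URL so the service can't be abused as an open proxy:

    // Separate "file downloader" service: the website origin calls this service
    // instead of fetching the URL itself, so outbound connections originate here.
    const express = require('express');
    const https = require('https');
    const http = require('http');

    const app = express();
    app.use(express.json());

    app.post('/fetch', (req, res) => {
      const target = req.body.url;
      if (!target) return res.status(400).json({ error: 'url required' });

      const client = target.startsWith('https:') ? https : http;
      client.get(target, (upstream) => {
        res.status(upstream.statusCode || 502);
        upstream.pipe(res); // stream the remote file back to the caller
      }).on('error', () => res.status(502).json({ error: 'fetch failed' }));
    });

    app.listen(9000);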
Your question implies that you may be comfortable using a non-Windows server. A lot of software (most web servers, in fact) can operate as a proxy, but suffers from the same problem as ARR: lack of support for the HTTP CONNECT verb, which clients use to open an HTTPS tunnel through the proxy before issuing a GET. Squid is very popular, open source, and supports connecting to pretty much anything, though it's not trivial to set up. Apache also has support for this via mod_proxy_connect, but I have no experience with it and the online documentation isn't very robust. It's Apache, though, so it may be worth the extra investigation.
I have a standard LAMP EC2 instance set-up running on Amazon's AWS. Having also installed Node.js, socket.io and Express to meet the demands of live updating, I am now at the stage of load balancing the application. That's all working, but my sockets aren't. This is how my set-up looks:-
                 --- EC2 >> Node.js + socket.io
                /
Client >> ELB --
                \
                 --- EC2 >> Node.js + socket.io

[RDS MySQL - both EC2 instances communicate with this]
As you can see, each instance has an installation of Node and socket.io. However, occasionally the Chrome debugger will show the socket request failing with a 400 and the reason {"code":1,"message":"Session ID unknown"}, and I guess this is because it's communicating with the other instance.
Additionally, let's say I am on page A and the socket needs to emit to page B - because of the load balancer these two pages might well be on a different instance (they will both be open at the same time). Using something like Sticky Sessions, to my knowledge, wouldn't work in that scenario because both pages would be restricted to their respective instances.
How can I get around this issue? Will I need a whole dedicated instance just for Node? That seems somewhat overkill...
The issues come up when you consider both websocket traffic (layer 4-ish) and HTTP traffic (layer 7) moving across a load balancer that can only inspect one layer at a time. For example, if you set the ELB to load balance at layer 7 (HTTP/HTTPS), then websockets will not work at all across the ELB. However, if you set the ELB to load balance at layer 4 (TCP), then any fallback HTTP polling requests could end up at any of the upstream servers.
You have two options here. You can figure out a way to effectively load balance both HTTP and websocket requests or find a way to deterministically map requests to upstream servers regardless of the protocol.
The first one is pretty involved and requires another load balancer. A good walkthrough can be found here. It's worth noting that when that post was written HAProxy didn't have native SSL support. Now that this is the case it might be possible to just remove the ELB entirely, if that's the route you want to go. If that's the case the second option might be better.
Otherwise you can use HAProxy on its own (or a paid version of Nginx) to implement a deterministic load balancing mechanism. In this case you would use IP hashing, since socket.io does not provide a route-based mechanism to identify a particular server the way sockjs does. This would use the first 3 octets of the IP address to determine which upstream server gets each request, so unless the user changes IP addresses between HTTP polls, this should work.
The solution would be for the two (or more) Node.js installs to use a common session source.
Here is a previous question on using Redis as a common session store for Node.js: How to share session between NodeJs and PHP using Redis?
and another
Node.js Express sessions using connect-redis with Unix Domain Sockets
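For reference, a minimal sketch of what the shared store looks like with express-session and connect-redis (this uses the classic connect-redis v3 style constructor; newer versions take a Redis client instance instead, and redis.internal is an assumed hostname):

    // Both Node.js instances point their session middleware at the same Redis.
    const express = require('express');
    const session = require('express-session');
    const RedisStore = require('connect-redis')(session);

    const app = express();

    app.use(session({
      store: new RedisStore({ host: 'redis.internal', port: 6379 }),
      secret: 'replace-with-a-real-secret',
      resave: false,
      saveUninitialized: false,
    }));

    app.get('/', (req, res) => {
      // Any instance behind the balancer now sees the same session data.
      req.session.views = (req.session.views || 0) + 1;
      res.send('views: ' + req.session.views);
    });

    app.listen(3000);

With that in place it no longer matters which EC2 instance a given request lands on, since every instance reads the same session data from Redis.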
I set up a Node.js HTTP server. It listens to path '/' and returns an empty HTML template on a get request.
This template includes Require.js client script, which creates Socket.IO connection with a server.
Then all communication between client and server is provided by Web Sockets.
On connection, server requires authentication; if there are authentication cookies then client sends them to server for validation, if no cookies then client renders login view and waits for user input, etc.
So far everything works, after validating credentials I create a SID for user and use it to manage his access rights. Then I render main view and application starts.
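For context, the flow described above looks roughly like this (sketch only; the event names 'authenticate'/'authenticated' and the SID scheme are my own placeholders, not the actual app's):

    // Server side: require authentication over the socket before anything else.
    const crypto = require('crypto');
    const io = require('socket.io')(3000);

    io.on('connection', (socket) => {
      socket.authenticated = false;

      socket.on('authenticate', (credentials) => {
        if (validate(credentials)) {           // placeholder validation
          socket.sid = crypto.randomBytes(16).toString('hex');
          socket.authenticated = true;
          socket.emit('authenticated', { sid: socket.sid }); // client renders main view
        } else {
          socket.emit('unauthorized');         // client renders login view
        }
      });
    });

    function validate(credentials) {
      return credentials && credentials.user === 'demo' && credentials.pass === 'demo';
    }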
Questions:
Is there a need to use HTTPS instead of HTTP since I'm only using HTTP for sending script to the client? (Note: I'm planning to use Local Storage instead of cookies)
Are there any downfalls in using pure WebSockets without HTTP?
If it works, why nobody's using that?
Is there a need to use HTTPS instead of HTTP since I'm only using HTTP for sending script to the client? (Note: I'm planning to use Local Storage instead of cookies)
No; HTTP or HTTPS is still required for the WebSocket handshake. The choice between HTTP and HTTPS is a security decision. If you only use it for sending the script, there is no harm in plain HTTP. If you want to implement user login/authentication in your pages, then HTTPS should be used.
Are there any downfalls in using pure WebSockets without HTTP?
WebSockets and HTTP are very different. If you use pure WebSockets you will miss out on what HTTP does well: it is the preferred choice for cross-platform web services and is good for document traversal/retrieval, but it is one-way request/response. WebSockets provide full-duplex communication channels over a single TCP connection and let us get rid of workarounds and hacks like Ajax, Reverse Ajax, Comet, etc. The important thing to note is that the two can coexist, so aim for WebSockets without leaving out HTTP.
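A tiny sketch of that coexistence in Node: Express keeps answering ordinary HTTP requests while socket.io shares the same server and port for the full-duplex channel.

    // One server, one port: HTTP for the initial page, WebSockets for everything else.
    const express = require('express');
    const http = require('http');

    const app = express();
    const server = http.createServer(app);
    const io = require('socket.io')(server);

    app.get('/', (req, res) => {
      res.send('<script src="/socket.io/socket.io.js"></script>'); // plain HTTP
    });

    io.on('connection', (socket) => {
      socket.emit('news', { hello: 'world' }); // full-duplex over the same port
    });

    server.listen(8080);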
If it works, why nobody's using that?
We live in the age of HTTP; WebSockets are relatively new. In the long term, WebSockets will gain popularity and take up a larger share of web services. Until recently, many browsers did not support WebSockets properly: IE 10 is the latest and only version of IE to support them, and nginx, a wildly popular server, did not support WebSockets until Feb-March 2013. It will take time for WebSockets to become mainstream, but it will happen.
Your question is pretty similar to this one
Why use AJAX when WebSockets is available?
At the end of the day, they were created for different things, although you can use WebSockets for most, if not everything, that can be done with normal HTTP requests.
I'd recommend using HTTPS, as you do seem to be sending authentication data over the WebSocket (which will then also use SSL, no?), but then it depends on your definition of 'need'.
Downfalls - Lack of support for older browsers
It's not used like this in many other situations because it's not necessary and it's still 'relatively new'.