Using NGINX to mask destination servers' IP addresses in HTTP traffic - security

So I'm relatively new to nginx, but I want to know how I can use a reverse proxy to mask the destination servers' IP addresses. Here is how it works so far:
I have a reverse proxy (rp), a main server (ms), and a secondary server (ss).
Currently, using an nginx proxy, I connect via rp---ms, but in Wireshark I see the main server's IP address as the destination.
If the main server passes me to the secondary server for a resource, I see the secondary server's IP address in Wireshark.
What I want is rp---ms---ss or rp---ss, where the to/from traffic only ever shows the IP of the reverse proxy, outgoing and returning - is this possible?

Are you capturing with Wireshark from inside the nginx network? Packets forwarded by nginx will of course carry the main server's IP address as their destination; what you need is to reconfigure the firewall so that only nginx is reachable from outside, with ms and ss on an internal network.
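If only the proxy is reachable from the outside, nginx can front both backends. A minimal sketch, where example.com, 10.0.0.2 and 10.0.0.3 are hypothetical stand-ins for rp's public name and the internal addresses of ms and ss:

    # Only rp is exposed by the firewall; ms and ss live on an internal
    # network that outside clients cannot reach directly.
    server {
        listen 80;
        server_name example.com;            # rp's public name (hypothetical)

        location / {
            proxy_pass http://10.0.0.2;     # ms, internal address only
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location /secondary/ {
            proxy_pass http://10.0.0.3;     # ss, proxied directly (rp---ss)
        }
    }

Outside the firewall, both directions of every exchange then carry only rp's IP; whether rp forwards to ms, or ms fetches the resource from ss internally, never appears on the wire.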

Related

How to add the same port on different network interfaces on the same machine?

I have a Fedora workstation with 5 physical network interfaces on it.
Four of the network interfaces have the IPs 10.10.10.11, 10.10.10.12, 10.10.10.13, and 10.10.10.14.
A FileRun service (port 8081), a GitLab service (port 8082), and a Transmission service (port 8083) run on it via Docker. On my Mac, I can access 10.10.10.11:8081, 10.10.10.12:8081, 10.10.10.13:8081, or 10.10.10.14:8081.
What I want is to access FileRun via 10.10.10.11:80, GitLab via 10.10.10.12:80, and Transmission via 10.10.10.13:80. How do I configure the network?
Thanks a lot.
You have to bind each service that uses the same port to the right IP address/interface.
Ports (UDP or TCP) have their own pool per IP address: you can listen on the same port number as long as you change the IP address or the protocol (UDP vs. TCP).
See: http://www.bleepingcomputer.com/tutorials/tcp-and-udp-ports-explained/
Alternatively, you could use a webserver like NGINX as a reverse proxy and bind the services to a specific IP address and port. You would also gain content caching and web acceleration for improved performance.
Find out how to set it up: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
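If you go the nginx route, a minimal sketch, assuming the Docker services keep publishing their current ports on localhost:

    # One server block per interface address: each forwards port 80 on that
    # IP to the service's existing Docker port (ports from the question).
    server {
        listen 10.10.10.11:80;
        location / { proxy_pass http://127.0.0.1:8081; }   # FileRun
    }
    server {
        listen 10.10.10.12:80;
        location / { proxy_pass http://127.0.0.1:8082; }   # GitLab
    }
    server {
        listen 10.10.10.13:80;
        location / { proxy_pass http://127.0.0.1:8083; }   # Transmission
    }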

How does the Host header help on a physical host hosting multiple servers?

I have a single machine with the IP 1.2.3.4. This machine has 2 web servers and an FTP server:
Web Server 1 listens to port 82; the domain for it: ws1.example.com
Web Server 2 listens to port 83; the domain for it: ws2.example.com
FTP Server listens to port 21; the domain for it: ftp.example.com
This is what the DNS mapping looks like:
ws1.example.com CNAME example.com
ws2.example.com CNAME example.com
ftp.example.com CNAME example.com
example.com A 1.2.3.4
Case 1: I make a request at the browser URL ws1.example.com:82 and the DNS redirects me to example.com but with the Host header: ws1.example.com.
Case 2: I make a request at the browser URL ws2.example.com:83 and the DNS redirects me to example.com but with the Host header: ws2.example.com.
In both the cases:
the request ultimately reaches the same physical machine
when the request arrives:
In Case 1, the request arrives at this machine and the request is attended to by the application that is listening on port 82 i.e. Web Server 1.
In Case 2, the request arrives at this machine and the request is attended to by the application that is listening on port 83 i.e. Web Server 2.
The Host header, as I understand it, informs the receiving host which server (of the multiple servers hosted at this IP) the request is meant for, so the request can be directed to the appropriate application.
My question is:
In this example, what is the purpose of the Host header, given that the same physical machine with the same IP has multiple applications listening on their corresponding ports? Once the request reaches this machine, the application on the matching port will pick it up and the other applications will ignore it. So what purpose does the Host header serve here, when the ports are already doing their job?
Can I infer that
CNAMEs,
multiple web servers behind a single IP, and
subsequent resolution of a particular user request to the appropriate web server via the Host header
make sense only when you are using something like a reverse proxy? That is, one machine interfaces with the client and forwards user requests to the appropriate web server, each on a separate machine behind the reverse proxy and all listening on the same port (e.g. 80). In that case ws1.example.com and ws2.example.com both resolve to the reverse proxy at example.com, and the reverse proxy forwards each request to the appropriate host based on the Host header?
No DNS redirections
First an important terminology fix:
There are no "redirects" in the DNS. In your case, the DNS is just used to map a name to an IP. Sometimes, because of a CNAME, a name is mapped to another name, which is then mapped to an IP. It does not matter how many intermediate steps there are; in the end, a name maps to an IP (or there is a DNS resolution failure).
This also means that if the URL has a specific port, that port is not changed: the final IP will be queried on the port mentioned in the URL.
Redirections are an HTTP-level feature: when queried for https://www.mygreatsite.example/foo, a webserver can reply with an HTTP return code of 301, 302, 303, 307 or 308, giving you (the HTTP client, a.k.a. the browser) the new URL to go to.
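In nginx terms, for instance, such a redirection is something the webserver is configured to answer; DNS only resolves the name beforehand. A minimal sketch reusing the hypothetical name above:

    # The server replies 301 with a Location header pointing at the new URL;
    # no DNS record is consulted or changed to produce this answer.
    server {
        listen 80;
        server_name www.mygreatsite.example;
        return 301 https://www.mygreatsite.example$request_uri;
    }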
HTTP virtual hosting
In the good old days, IP addresses were plentiful. If you were hosting both www.site1.example and www.site2.example on the same physical box, you could attach a different IP address to each.
Hence, in that specific case, the HTTP Host header is in a way useless: the mere fact of connecting to either 192.0.2.37 or 192.0.2.42 already says which site you want.
In fact, in HTTP/0.9 there was no Host header, as there were no headers at all.
But then, with mass virtual hosting coming into play and IPv4 addresses becoming scarce, you could no longer attach a single IP address to each site; it would have been a waste.
So you had, through the DNS, either directly or indirectly (CNAME records), both websites resolving to the same IP.
Hence, when the HTTP client connects to the server, the server by default has no way of knowing which website you want. That is why the Host header, filled in by the client, lets the server know which website you want to access, irrespective of the IP address that was resolved earlier through the DNS.
By default HTTP uses port 80, so it is often not visible in the URLs.
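This is what name-based virtual hosting looks like in practice; a minimal sketch in nginx configuration (site names from above, document roots hypothetical), where both names resolve to the same IP and the Host header selects the block:

    # Two sites, one IP, one port: nginx matches the client's Host header
    # against server_name to choose which site to serve.
    server {
        listen 80;
        server_name www.site1.example;
        root /var/www/site1;
    }
    server {
        listen 80;
        server_name www.site2.example;
        root /var/www/site2;
    }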
Of course, if you forced your clients to use http://www.site1.example:4569 on one side and http://www.anothersite2.com:9873 on the other, then you are right, the Host header would not really be needed.
Except that this plan falls down for several reasons:
Port numbers are not an infinite space either, and many are already assigned to other things; so even if you extended this scheme, at some point you could not attach new websites to the same IP.
More important than that technical point: for humans this would be a nightmare. Many people would forget the port number and never arrive at the appropriate website.
Hence it is typically not done like that: if you want to expose a given service over HTTP on a non-default port, you typically install a reverse proxy in front of it. Or you do an HTTP redirection from http://www.coolpublicname.example/ to http://www.complicatedinternalname.example:9713, but then the client sees this naked truth.
HTTPS virtual hosting
In passing, note that HTTPS adds a level of complexity: the HTTPS webserver needs to send its certificate to the client, and since each website can have a different certificate, it needs to know which website the client wants. It could learn that from the Host HTTP header, but that header only arrives after the TLS handshake has finished, so at the early stage where the server must send a certificate it is not yet available.
So in the earliest days of HTTPS we were forced back to IP-based virtual hosting, instead of the name-based virtual hosting that plain HTTP made possible thanks to the Host header.
The solution was a TLS extension, Server Name Indication (SNI): the client sends the website name early in the handshake, so the server can present the appropriate certificate. With it, we are back in business in the name-based case, where you can theoretically have any number of names resolving to the same IP, all served by one webserver.
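In nginx, for example, the TLS variant looks just like the plain-HTTP sketch above, with a per-name certificate selected via SNI (certificate paths are hypothetical):

    # One IP, two HTTPS sites: nginx picks the certificate from the SNI
    # name the client sends during the TLS handshake.
    server {
        listen 443 ssl;
        server_name www.site1.example;
        ssl_certificate     /etc/ssl/site1.crt;
        ssl_certificate_key /etc/ssl/site1.key;
    }
    server {
        listen 443 ssl;
        server_name www.site2.example;
        ssl_certificate     /etc/ssl/site2.crt;
        ssl_certificate_key /etc/ssl/site2.key;
    }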

How to expose tornado websocket from local machine

I have built a d3.js dashboard that ties into a Tornado websocket. Everything works perfectly locally. I would now like to share the websocket with a few friends - nothing production. Is there a way to do this without a big deployment on Heroku or a similar service? I've googled and can't seem to find an answer. Thanks.
Not specific to Tornado. This is more of a networking question.
What you want to do is:
Run your server on your computer.
Connect to the internet.
Note down your public IP address.
Give your IP address to your friends.
Certain things you need to take care of:
Run your server on a higher, non-standard port (e.g. 8000 would be good), because many ISPs block traffic to port 80 and other standard ports.
The IP address assigned to you by your ISP will most probably be dynamic. That means every time you disconnect and reconnect to the internet, your IP address will change.
Turn off your computer's firewall, or better, open just the port your server is running on, to let the traffic in.
Finally, you'll need to configure port forwarding on your router. All incoming HTTP requests arrive at your router at your public IP address, but the computer running your server has an internal IP address assigned by the router. So you'll need to forward incoming requests on your chosen port to your computer's internal IP.
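As a side note, if you would rather expose one well-known port and keep Tornado internal, a reverse proxy such as nginx can pass the websocket upgrade through. A minimal sketch, assuming Tornado listens locally on port 8000:

    # Websocket pass-through: the Upgrade/Connection headers must be
    # forwarded or the websocket handshake will fail.
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8000;   # Tornado (port assumed)
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }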

Nodejs and Wordpress both port 80 virtualhost configuration on Mac

I am currently running my Node.js web app on port 80 on my Mac with the domain www.aaa.com.
Now I want to add WordPress (Apache) on port 80 on this Mac too, with the domain www.bbb.com.
How do I configure the virtual hosts? I have tried a lot of research on the internet, but no luck. Can anyone tell me how? Thanks!
If you can have multiple public IP addresses, you just need to:
map each of the domains to a different IP address
have node.js and Apache each listen on one of the IP addresses
If not (you only have a single IP address), you'll need to have one of the two servers take port 80 and forward/proxy the data to the other (listening on a separate port) for its requests. Or, alternatively, use a reverse proxy (such as pound) to do this job (you then have the reverse proxy on port 80, and both node.js and Apache on other ports).
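With nginx as that reverse proxy, for instance, the single-IP setup could look like the sketch below; the backend ports 3000 and 8080 are assumptions, so move node.js and Apache there first:

    # nginx owns port 80 and dispatches on the Host header; node.js and
    # Apache/WordPress listen on hypothetical local ports.
    server {
        listen 80;
        server_name www.aaa.com;
        location / { proxy_pass http://127.0.0.1:3000; }   # node.js app
    }
    server {
        listen 80;
        server_name www.bbb.com;
        location / { proxy_pass http://127.0.0.1:8080; }   # Apache/WordPress
    }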

Node server fails to listen to public IP

I am trying to get my Node.js server to listen to a public IP so that I can access it on a different network than my home network.
I've purchased a domain and used a DNS host - right now I'm using No-IP and have downloaded their client to push my IP to their servers.
When I set the IP in the No-IP configuration to my local IP, I can use the domain name and hit my server from another computer on my network. But if I change it to my public IP and use the domain, the request hangs for about 10 seconds and then fails. I've set up port forwarding (I believe correctly) and opened inbound/outbound traffic on the port I'm listening on (not 80 right now). I even disabled my firewall completely.
I tried changing server.listen(4444) to server.listen(4444, '0.0.0.0') as I've seen all over the web. But this doesn't work.
Anyone have ideas out there? I feel like maybe my ISP is blocking it somehow? I'm fairly new to networking, so maybe I'm missing something critical?
Thanks!
server.listen(4444) should be fine. As long as your server machine doesn't have multiple active network connections, you don't need to specify an IP address. Port forwarding on your router (if configured correctly) will direct requests that arrive at the public IP address to the actual local IP address of your host.
Note that for port forwarding to work reliably, you will have to give your host a fixed private IP address (not a DHCP assigned address) so the IP address will not vary. Then, you configure port forwarding to that fixed IP address.
Then, you need to do some network debugging. From a computer outside your own network (e.g. somewhere out on the internet), run a couple of commands against your public DNS name:
ping yourserver.net
tracert yourserver.net (traceroute on Linux/macOS)
If your DNS entry is not working, ping should tell you immediately that it didn't find yourserver.net.
If the DNS entry is working, but the IP address can't be reached, then ping will tell you that the server is unreachable. At that point, you will know you have a networking issue with connecting to your public IP address from the internet.
If ping is initially finding your server, but packets aren't flowing properly, then either the ping results or the tracert results should give you an idea where to look next.
If ping and tracert are finding your public IP and packets are flowing to/from it, but you still can't connect with the browser, then either the IP address isn't set correctly (so you're not connecting to the right server), your node.js server isn't listening appropriately, or you aren't using the right IP/port in the browser for the actual node.js process. If you suspect this is the case, back up and make sure everything works purely on your own private network, with the browser connecting directly to the local IP address and port. Once that works, you will know the node.js server is behaving correctly, and you can go back to working on the public IP.
FYI, if you tell us what the public DNS name and public IP address are, we here can do a few steps of this debugging from our own computers.
It may be that your router can only forward a port to a computer on your network, but not change the port when forwarding. If that's the case, then you have these options:
Put everything on port 4444. Have your server listen to 4444, specify 4444 in the port forwarding in the router and then put 4444 in the URL like http://thecastle.ninja:4444.
Set up the port forwarding for port 80 and change your server to listen on port 80 (on Unix you will need elevated privileges to listen on port 80 directly). You should then be able to use a URL like http://thecastle.ninja.
Set up the port forwarding for port 80, keep your server on port 4444, and use iptables rules to route 80 to 4444 on your server. This lets your server run on the less privileged port 4444 while the end user gets the default port 80. I have a node.js server on a Linux Raspberry Pi configured this way. You should then be able to use a URL like http://thecastle.ninja.
Run a proxy on your server that routes port 80 to port 4444. This is probably more than you need, but nginx is a popular one and can do this kind of forwarding on the server (see the sketch below).
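For that last option, a minimal nginx sketch (domain taken from the URLs above, the node.js server assumed to stay on 4444):

    # nginx holds the privileged port 80 and relays every request to the
    # node.js process listening on 4444.
    server {
        listen 80;
        server_name thecastle.ninja;
        location / { proxy_pass http://127.0.0.1:4444; }
    }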
