docker nodejs nginx socket.io how to get actual ip of client not natted ip - node.js

I have a project at http://github.com/KPB-US/dispatch-display, and in it I determine which [fire] station a display [client] belongs to based on its IP address when the socket connects.
I have a Docker composition set up so it can be up and running easily using docker-compose build and docker-compose up. This builds an nginx server that points to the upstream Node application running on a Node server.
Everything runs fine when I specify network_mode: "host" in both services in the docker-compose.yml. I'd rather have it run more securely using Docker's default network mode, but when I do that, I get NATed [I guess, see example message below] IP addresses from the clients instead of their actual IP addresses, so I don't know which stations [clients] they are coming from.
A connection attempt was made but that remote address is not
registered in the STATIONS list - 172.18.0.1
I am determining the incoming IP address from the x-forwarded-for header and, if that is missing, from socket.conn.remoteAddress.
Any ideas on how I can resolve this? I'd like to keep the containers isolated from the network (as mentioned above) but still be able to know where the incoming connection is really coming from.
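One common approach (not from this thread, so treat it as a sketch) is to have nginx pass the real client address along with the proxied request, e.g. with `proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;` in the location block, and then parse that header in the Node app. The helper function below is ours; the header and fallback names match the question:

```javascript
// Sketch: extract the real client IP behind a reverse proxy.
// Assumes nginx sets X-Forwarded-For; falls back to the socket's
// remote address (which will be the NATed proxy address otherwise).
function clientIp(headers, remoteAddress) {
  const forwarded = headers['x-forwarded-for'];
  if (forwarded) {
    // X-Forwarded-For can be a comma-separated chain of proxies;
    // the left-most entry is the original client.
    return forwarded.split(',')[0].trim();
  }
  return remoteAddress;
}

// In a socket.io connection handler this would be called roughly as:
//   clientIp(socket.handshake.headers, socket.conn.remoteAddress)
console.log(clientIp({ 'x-forwarded-for': '203.0.113.7, 172.18.0.1' }, '172.18.0.1'));
// → 203.0.113.7
```

Note that X-Forwarded-For is client-controllable unless nginx overwrites it, so the STATIONS lookup should only trust it when the request really came through the proxy.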

Related

Can I use Argo Tunnel to connect my clusters together without exposing to the internet?

Note that I'm a newbie with Argo Tunnel.
I have several servers in different locations, and only a set of them is exposed to the internet (let's say there are two sets, backend and frontend).
I want to connect from frontend to backend using Argo Tunnel as if they were on the same machine. Let's say I have set up nginx with proxy_pass http://backendde:8008, where backendde is a name for one of my backends (or set an IP like 10.25.36.1).
Maybe my question should be: "How can I expose one server to another using Argo Tunnel?"
Note that I can't edit /etc/hosts (it's a PaaS/Kubernetes), so I have to assign a fixed local IP to each instance.
I have this
/root/cloudflared tunnel --no-autoupdate run --token $TUNNEL_TOKEN --protocol http2 $TUNNEL_NAME
and I don't know how to set a fixed IP for my instances and then call that IP from another server (with Argo set up).
Thanks in advance
What you want to do is called "connecting private networks" in Argo Tunnel lingo.
You will need to deploy a WARP client on one end and a cloudflared daemon on the other end to be able to route traffic over the Cloudflare edge network (without exposing the host to the internet).
See details here:
https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/private-net/connect-private-networks/
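As a rough sketch of the setup described in those docs (the tunnel name and CIDR below are placeholders, not values from the question), the cloudflared side advertises the private subnet, and hosts running the WARP client in the same Zero Trust organization can then reach it:

```shell
# On the backend host: create a tunnel and advertise its private subnet.
cloudflared tunnel create backend-tunnel
cloudflared tunnel route ip add 10.25.36.0/24 backend-tunnel
cloudflared tunnel run backend-tunnel

# On the frontend host: run the WARP client enrolled in the same
# Zero Trust organization; traffic to e.g. 10.25.36.1:8008 is then
# routed through the Cloudflare edge to the backend.
```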

Docker container with multiple outgoing IPs

I have an application written in Node.js which uses multiple outgoing IPs to make requests to a website. Basically, I want to make requests to some website from different IPs, which I have assigned to my VPS. I want to containerize my application with Docker, but Docker only has access to the main IP of the host machine, and I was not able to assign my other IPs to the container. I've tried many things and have been stuck on this problem for days...
IPv4 address for enp1s0: xxx.xx.xxx.70
IPv4 address for enp1s0: xxx.xxx.x.145
IPv4 address for enp1s0: xxx.xxx.xxx.154
I need to be able to select one of these IPs from the Node.js application inside the Docker container and make an HTTP request with it.
If that is not possible, it is also acceptable to assign each IP to its own container (so 3 containers in total with different outgoing IPs), but this is not ideal. I'm looking for a painless way of doing this, since I want to be able to pre-configure my apps and docker-compose to automate most of this work.

Expose node js app with host as localhost on kubernetes

I have spent a whole day looking for an answer to why my Node.js app is not reachable from the internet when I run it on Kubernetes with LoadBalancer as the service type. The solution was to change the host from localhost to 0.0.0.0 in the Node.js server app definition, but I still don't know why.
I hope this saves someone time in the future.
When you say that you set the host, I guess you mean what the app is told to listen on in the listen function (e.g. an Express app.listen call) - all interfaces instead of localhost only. You can bind to a particular address or leave it open to all interfaces by using 0.0.0.0 (the zeros function as a mask for matching rather than a true IP address - https://stackoverflow.com/a/20778887/9705485).
I imagine you had your app running fine locally and were able to access it from your host machine with your localhost configuration. This would be because your local machine was accessing it in a way that conforms to the mask; your config was effectively saying only localhost can access this app. When you ported it to Kubernetes, the incoming connection had to go over the network, arriving from an external address that did not match the localhost mask. So to get it to work you changed the mask to make it open to all IPs.

How to expose tornado websocket from local machine

I have built a d3.js dashboard that ties into a Tornado websocket. Everything works perfectly locally. I would now like to share the websocket with a few friends - nothing production-grade. Is there a way to do this without a big deployment on Heroku or a similar service? I've googled and can't seem to find an answer. Thanks
Not specific to Tornado. This is more of a networking question.
What you want to do is:
Run your server on your computer.
Connect to the internet.
Note down your public IP address.
Give your IP address to your friends.
Certain things you need to take care of:
Run your server on a higher, non-standard port (e.g. 8000), because some ISPs block inbound traffic on port 80 and other standard ports.
The IP address assigned to you by your ISP will most probably be dynamic. That means every time you disconnect and reconnect to the internet, your IP address will change.
Configure your computer's firewall to let in traffic on whichever port your server is running on.
Finally, you'll need to configure port forwarding on your router. What that means is that all incoming HTTP requests will arrive at your router at your public IP address, but the computer running your server will have an internal IP address assigned by the router. So you'll need to forward incoming requests on that port to your computer's internal IP.

Restcomm cluster: nodes list is empty in sip-balancer

I'm trying to create a Restcomm cluster: a sip-balancer plus a few Restcomm instances, but I can't connect the Restcomm node and the sip-loadbalancer.
I used this tutorial - http://docs.telestax.com/sip-servlets-clustering-high-availability/ - but haven't gotten any result.
It seems there should be two steps:
change the path-name attribute in standalone/configuration/standalone-sip.xml
add org.mobicents.ha.javax.sip.BALANCERS to standalone/configuration/mss-sip-stack.properties
As I understand it, the node and the load balancer use RMI as a channel. I can see (using netstat) that the server listens on port 2000 and the node establishes a connection to it.
But when I try to use the load balancer from a SIP client, it returns "error 500 - no available nodes".
I also used a remote debugger - the nodes list is empty.
Have I missed something?
P.S. I used a Docker Restcomm instance and the sip-loadbalancer on the same machine.
Thanks,
So I have found my issue.
According to the log file on the Restcomm node, it can't connect to the balancer via RMI.
The connection error is very strange - Connection refused to host: 127.0.0.1, and sometimes Connection refused to host: 127.0.1.1.
Yesterday I tried to specify java.rmi.server.hostname, but it did not help.
Today I created a small RMI client for the balancer, and it worked from my local machine (the balancer is hosted on it too). However, the app did not work from a virtual machine. So I added more logging to the code and found:
the app can look up the remote bean
the remote endpoint of this bean is 127.0.0.1, but it should be the IP address of the remote machine
After that I specified externalHost and public-ip for my sip-balancer and still got a bean endpoint address of 127.0.1.1.
So the issue was found - Ubuntu uses this "local" IP address for your machine name.
You can find it in /etc/hosts.
The sip-balancer (a Java application) picks it up as the endpoint IP address for its services.
My fix: change 127.0.1.1 to 127.0.0.1 in /etc/hosts. After that, the sip-balancer provides the real IP address of your machine for remote objects.
Conclusion: my issue - wrong operating system :)
Common solution: developers should check the address type and avoid using loopback addresses.
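For reference, the /etc/hosts change described above looks like this (the hostname "myhost" is a placeholder):

```
# Debian/Ubuntu default: the hostname maps to 127.0.1.1, which the
# RMI layer then advertises as the remote endpoint address.
127.0.0.1   localhost
127.0.1.1   myhost

# After the fix described above:
127.0.0.1   localhost myhost
```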
