I have an application written in Node.js which uses multiple outgoing IPs to make requests to a website. Basically, I want to make requests to some website with different IPs, which I have assigned to my VPS. I want to containerize my application with Docker, but Docker only has access to the main IP of the host machine, and I was not able to assign my other IPs to that container. I've tried many things and have been stuck on this problem for days...
IPv4 address for enp1s0: xxx.xx.xxx.70
IPv4 address for enp1s0: xxx.xxx.x.145
IPv4 address for enp1s0: xxx.xxx.xxx.154
I need to be able to select one of these IPs from the Node.js application inside the Docker container and make an HTTP request with it.
Also, if what I said is not possible, it is also acceptable to assign one IP to each container (so 3 containers in total with different outgoing IPs), but this is not ideal. I'm looking for a painless way of doing this, since I want to be able to pre-configure my apps and docker-compose to automate most of this work.
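For reference, Node's http/https request options accept a localAddress field that chooses which local IP the outgoing socket binds to. A minimal sketch, assuming the container can actually see those addresses (e.g. via network_mode: host) and with placeholder IPs and target host:

const https = require('https');

// Placeholder addresses standing in for the three IPs assigned to the VPS.
const OUTGOING_IPS = ['203.0.113.70', '203.0.113.145', '203.0.113.154'];

// Make one request, binding the outgoing socket to the given local IP.
function requestFrom(ip) {
  return new Promise((resolve, reject) => {
    https
      .get({ host: 'example.com', path: '/', localAddress: ip }, (res) => {
        res.resume(); // drain the body; we only care about the status here
        res.on('end', () => resolve({ ip, status: res.statusCode }));
      })
      .on('error', reject);
  });
}

// Fire one request per outgoing IP.
Promise.all(OUTGOING_IPS.map(requestFrom)).then(console.log).catch(console.error);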
Note that I'm a newbie with Argo Tunnel.
I have several servers in different locations and only a set of them exposed to the internet. (Let's say there are two sets, backend and frontend.)
I want to connect from frontend to backend using Argo Tunnel as if they were on the same machine. Let's say I have set up Nginx with proxy_pass http://backendde:8008! Now, backendde is a name for one of my backends (or set an IP like 10.25.36.1).
Maybe my question should be: "How can I expose one server to another using Argo Tunnel?"
Note that I can't edit /etc/hosts (it's a PaaS/Kubernetes), so I have to assign a fixed local IP to each instance.
I have this:
/root/cloudflared tunnel --no-autoupdate run --token $TUNNEL_TOKEN --protocol http2 $TUNNEL_NAME
and I don't know how to set a fixed IP for my instances and then call that IP from another server (with Argo set up).
Thanks in advance
What you want to do is called "connecting private networks" in Argo Tunnel lingo.
You will need to deploy a "WARP" client on one end and a cloudflared daemon on the other end to be able to route traffic over the Cloudflare edge network (without exposing the host to the internet).
See details here:
https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/private-net/connect-private-networks/
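Once that private-network route is advertised, the frontend's Node process can call the backend by its private IP as if it were local. A minimal sketch reusing the 10.25.36.1:8008 values from the question (the /health path is just a made-up example):

const http = require('http');

// 10.25.36.1:8008 is the backend address from the question; it is only
// reachable because cloudflared/WARP routes that private range for us.
http
  .get('http://10.25.36.1:8008/health', (res) => {
    console.log('backend answered with status', res.statusCode);
  })
  .on('error', (err) => console.error('backend unreachable:', err.message));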
I have a project at http://github.com/KPB-US/dispatch-display, and in it I determine what [fire] station a display [client] belongs to based on its IP address when the socket connects.
I have a Docker composition set up so it can be up and running easily using docker-compose build and docker-compose up. This builds an nginx server that proxies to the upstream Node application running on a Node server.
Everything runs fine when I specify network_mode: "host" in both services in the docker-compose.yml. I'd rather have it run more securely using Docker's default network mode, but when I do that, I get NATed [I guess, see example message below] IP addresses from the clients instead of their actual IP addresses, so I don't know which stations [clients] they are coming from.
A connection attempt was made but that remote address is not
registered in the STATIONS list - 172.18.0.1
I am determining the incoming IP address based on the x-forwarded-for header and, if that is missing, the socket.conn.remoteAddress.
Any ideas on how I can resolve this? I'd like to keep the containers isolated from the network (as mentioned above) but yet still be able to know where the incoming connection is really coming from.
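For reference, a minimal sketch of the lookup described above, written against socket.io v4; the STATIONS map and the port are made-up placeholders, not the project's actual code:

const { Server } = require('socket.io');

const io = new Server(3000); // placeholder port
const STATIONS = { '10.1.2.3': 'Station 1' }; // placeholder registry of station IPs

// Prefer the x-forwarded-for header set by the nginx proxy; fall back to the
// raw socket address (which is the NATed bridge address in the failing case).
function clientIp(socket) {
  const forwarded = socket.handshake.headers['x-forwarded-for'];
  return forwarded ? forwarded.split(',')[0].trim() : socket.conn.remoteAddress;
}

io.on('connection', (socket) => {
  const ip = clientIp(socket);
  if (!STATIONS[ip]) {
    console.warn(`connection from unregistered address - ${ip}`);
    return;
  }
  // ... associate this display with its station ...
});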
I have a node app running in one docker container, a mongo database on another, and a redis database on a third. In development I want to work with these three containers (not pollute my system with database installations), but in production, I want the databases installed locally and the app in docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Forwarding the same port from different containers to the host creates a collision.
I also know the containers will be on the same bridged network, and using the "curl" command I found out they're connected and I can access them using their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want the container of my app to be able to access MongoDB using "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (v1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve what you want:
You can use --net=host, which will make the container use your host network instead of the default bridge. You should note that this can raise some security issues (because the container can potentially access any other service you run on your host)...
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT
As stated in the comments, create some script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose this as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then inside the container use the env var named DOCKER_HOST to connect to your services.
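Inside the app that boils down to reading the variable and falling back to localhost when it is not set. A minimal sketch; the database names and ports are assumptions:

// Use the docker0 address passed in via `docker run -e DOCKER_HOST=$(getip)`,
// falling back to localhost when the variable is not set.
const dbHost = process.env.DOCKER_HOST || 'localhost';

const mongoUrl = `mongodb://${dbHost}:27017/myapp`;
const redisUrl = `redis://${dbHost}:6379`;

console.log('connecting to', mongoUrl, 'and', redisUrl);
// e.g. mongoose.connect(mongoUrl) and createClient({ url: redisUrl }) in the real app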
I have a simple NodeJS site running inside a Docker container, with its ports mapped to port 80 on the host. I have a domain pointing to the IP of the EC2 instance, and everything is working as expected.
If I want to run another, separate NodeJS site from a Docker container on the same instance, how can I map specific domain names to specific Docker containers?
E.g., let's assume the IP of my EC2 instance is 22.33.44.55 and my domains are domain-a.com and domain-b.com. My dockerized NodeJS applications are siteA and siteB.
domain-a.com is configured to point to 22.33.44.55. siteA is listening on port 80 at that IP address - this is what I currently have.
domain-b.com is configured to point to 22.33.44.55. I want this traffic mapped to the siteB Docker container.
You could set up an nginx reverse proxy on the host and bind to separate ports. The question and answer in this article explain it quite nicely, so I won't repeat it all:
https://www.digitalocean.com/community/questions/how-to-bind-multiple-domains-ports-80-and-443-to-docker-contained-applications
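If you would rather keep it all in Node instead of nginx, the same host-header routing can be sketched with the http-proxy package; the host ports 3001 and 3002 are assumptions for where the two containers would be published:

const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// Route by the Host header: domain-a.com -> siteA's published port,
// everything else (domain-b.com) -> siteB's published port.
http
  .createServer((req, res) => {
    const target =
      req.headers.host === 'domain-a.com'
        ? 'http://127.0.0.1:3001'
        : 'http://127.0.0.1:3002';
    proxy.web(req, res, { target });
  })
  .listen(80);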
I have a server that has multiple IP addresses. It also has multiple web apps written in Node.js that I'm running.
In short, I'd like to forward specific IP addresses to specific Node applications. How is this done? Should I use Apache or nginx?
Running CentOS 7.
Any running application can bind either to 0.0.0.0 (to catch all the packets) or to some specific IP (it must use this IP when it calls bind() on the socket). Thus almost any TCP/IP server application has an option like "Listen" in its config file or command line, so you should read your docs carefully to figure out how it can be done in your particular application. For Apache it's IP-based virtual hosts.
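In Node terms that means passing a host argument to listen(), so each app claims one of the server's addresses instead of 0.0.0.0. A minimal sketch with a placeholder address:

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('hello from app A\n');
});

// Bind to one specific address so another app can use port 80 on a different one.
server.listen(80, '203.0.113.70'); // placeholder; use one of the server's real IPs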
If you want to route user requests to different applications based on the user's IP address, you should read about the iptables nat table and the REDIRECT target.