Assign different public IPs to several instances of same Docker container - linux

I made a Docker container where I set up Python 3.6 and some specific software packages. In it, I run an application that connects to a remote API service with per-IP rate limits (i.e. an IP cannot send more than x calls per minute to the API service, otherwise it gets blocked). As a result, I want to run several copies of the same container, each one connecting from a different IP, so that I can work around that limit.
QUESTION
Is it possible to assign a public IP to a Linux container? How can it be done for a Docker container? Maybe via a proxy?

In order to assign a public IP to a Docker container you can use the Macvlan network driver, for example:
Macvlan network driver can be used to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network.
This command will create a Macvlan network which bridges with a given physical network interface:
docker network create -d macvlan -o macvlan_mode=bridge --subnet=172.16.86.0/24 --gateway=172.16.86.1 -o parent=eth0 pub_net
Then create a container that will use the above network:
docker run --name web_container --net=pub_net --ip=172.16.86.2 --mac-address 02:EE:4E:B5:21:48 -itd nginx
Now you have a public-facing container running on 172.16.86.2, and from the same Docker image you can of course run multiple containers and assign a public IP to each one.
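As a sketch, assuming the application image is called my_app (a hypothetical name) and that 172.16.86.3 and 172.16.86.4 are free addresses on that subnet, each extra copy simply gets its own --ip on the pub_net network:
docker run --name app1 --net=pub_net --ip=172.16.86.3 -itd my_app
docker run --name app2 --net=pub_net --ip=172.16.86.4 -itd my_app
The remote API service then sees each instance calling from a different address.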

Related

docker - Azure Container Instance - how to make my container accessible and recognized from outside?

I have a Windows container that should access an external VM database (not in a container; let's call it VM1), so I would define the l2bridge network driver for it in order to use the same virtual network.
docker network create -d "l2bridge" --subnet 10.244.0.0/24 --gateway 10.244.0.1
-o com.docker.network.windowsshim.vlanid=7
-o com.docker.network.windowsshim.dnsservers="10.244.0.7" my_transparent
So I suppose we definitely need to stick with this.
But now I also need to make my container accessible from outside, on port 9000, from other containers as well as from other VMs. I suppose this has to be done based on its name (host name), since the IP will change after each restart.
How should I make my container accessible from some other virtual machine (VM2)? Should I make any modifications to the network configuration, or just make sure they are both using the same DNS server?
Of course I will expose the port, but should I do any kind of additional network configuration in order to allow traffic on that specific port? I've read that by default network traffic is not allowed and that Windows may block some things.
I would appreciate help on this.
Thanks
It seems you do not use Azure Container Instances; you just run the container in a Windows VM. If that is right, the best way to make the container accessible from outside is to run it without setting a network and just map the container port to a host port. Then the container is reachable from outside on the exposed port. Here is the example command:
docker run --rm -d -p host_port:container_port --name container_name image
Then you can access the container from outside through the VM's public IP and the host port.
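For instance, assuming the application inside the container listens on port 9000 and the image is called my_app (hypothetical names), the run command, plus a Windows firewall rule if the host blocks the port, could look roughly like this:
docker run --rm -d -p 9000:9000 --name my_app_container my_app
netsh advfirewall firewall add rule name="Allow port 9000" dir=in action=allow protocol=TCP localport=9000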
Update:
Ok, if you use ACI to deploy your containers via docker-compose, then you need to know that ACI does not support Windows containers in a VNet. That means Windows containers cannot be created with a private IP in the VNet.
And if you use Linux containers, then you can deploy the containers in a VNet, but you need to follow the steps here. Then the containers have private IPs and they are accessible within the VNet through those private IPs.
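As a rough sketch of that path, with placeholder resource group, VNet, subnet, and image names, a Linux container group can be deployed into a VNet with the Azure CLI like this:
az container create --resource-group myResourceGroup --name mycontainer --image myimage:latest --vnet myVNet --subnet mySubnet
The container group then gets a private IP from mySubnet and is reachable only from inside the VNet.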

Can I create two ingress networks in Docker Swarm?

When I run a Docker host in swarm mode, an ingress network is created automatically. My question is: can I create a second ingress network with some command, or can I create multiple ingress networks?
The ingress network is internally used by docker swarm to route external traffic on a published port to your services running inside the cluster. You do not configure your services to use this network directly, instead you just publish a port and docker handles the rest.
For communication between your services, that needs to be done with a user defined overlay network. With a compose file and a docker stack deploy command, this is done by default. Without a compose file, you would create a network with the overlay driver, e.g. docker network create -d overlay appnet and then connect your services to that network. To isolate services from each other, you would add them to separate overlay networks (note the ingress network does not allow communication between services, it is strictly for north/south traffic in networking terms).
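A minimal sketch of that pattern without a compose file (the service and image names are placeholders):
docker network create -d overlay appnet
docker service create --name api --network appnet -p 8080:80 my_api_image
docker service create --name worker --network appnet my_worker_image
Here the published port 8080 still reaches api through the ingress network, while api and worker talk to each other privately over appnet.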

Use multiple specific network interfaces with docker swarm

I am using Docker Swarm to deploy an on-premise third-party application. The machine I am deploying on runs RHEL 7.6 and has two network interfaces. Users will interact with the application through eth0, but internal communication with their systems must use eth1 or the connection will be blocked by their firewalls. My application requires some of my services to establish connections to internal services on their network.
I created my swarm using:
$ docker swarm init --advertise-addr x.x.x.x
Where x.x.x.x is the eth0 inet address. This works for incoming user traffic to the service. However, when I try to establish connections to another service the connection times out, blocked by the firewall.
Outside of docker, on the machine, I can run:
ssh -b y.y.y.y user@server
Where y.y.y.y is the eth1 inet address, and it works. When I run the same in my docker swarm container I get this error:
bind: y.y.y.y: Cannot assign requested address
Is there some way I can use multiple network interfaces with docker swarm and specify which one is used within containers? I couldn't find much documentation on this. Do I need to set up some sort of proxy?
By default, Docker creates a bridged network from the network interfaces on your machine and then attaches each container instance to it via a separate virtual interface internally. So you would not be able to bind to eth1 directly as your Docker container has a different interface.
See https://docs.docker.com/v17.09/engine/userguide/networking/ for a general guide and https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/ for custom bridged networks with multiple physical interfaces.
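As one alternative to the custom docker0 approach those guides describe, a macvlan network parented on eth1 would give containers addresses on that interface's network, so their outbound connections leave through eth1. A single-host sketch, with the subnet, gateway, and target host below standing in for the real eth1 network values:
docker network create -d macvlan --subnet=y.y.y.0/24 --gateway=y.y.y.1 -o parent=eth1 internal_net
docker run --rm -it --network internal_net alpine ping internal-host
Note that plain docker run containers can join this network directly; swarm services would additionally need the macvlan configuration promoted to a swarm-scoped network.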

Unable to communicate between Docker containers from the same subnet after using CNI to assign IPs from an Azure VNet

I was trying to give IP addresses to Docker containers from an Azure VNet using the CNI plugin. I was able to assign the IPs to the containers, but they were not communicating with each other.
I followed this blog, which is based on this Microsoft document.
I created two containers using commands:
sudo ./docker-run.sh alpine1 default alpine
and
sudo ./docker-run.sh alpine2 default alpine
I checked that the IP of alpine1 is 10.10.3.59 and the IP of alpine2 is 10.10.3.61, which are the IPs I created inside a network interface as described in the above docs. So they did receive IPs from a subnet inside the VNet, but when I ping alpine1 from alpine2 with ping 10.10.3.59, it doesn't work. Am I missing something here, or do I have to do some other configuration after this?

Networking setup in docker

If I have a network setup in Docker as in the diagram (not shown), where MQTT and Node.js are two separate Docker containers and channel A is the connection between them:
Do I also have to use TLS to secure channel A? Do I even have to secure channel A? I guess not, as it is just a container-to-container channel. Am I correct?
I am a newbie as far as Docker is concerned, but I've read that docker0 will allow containers to intercommunicate but prevent the outside world from connecting to containers, until a port is mapped from the host to a container.
It's not clear from the question whether the Node.js app and the MQTT broker are in the same Docker container or in two separate containers, but...
The link between two Docker containers on a port that is not mapped to the host runs over the internal virtual network; the only way to sniff that traffic is from the host machine, so as long as the host machine is secure, that traffic should be secure without the need to run an SSL/TLS listener.
If both the Node.js app and the broker are within the same Docker container, then they can communicate over localhost, not even crossing the virtual network, so again there is no need to add SSL/TLS.
I wouldn't worry about the traffic unless the host system has potential hostile containers or users... but sometimes my imagination fails me.
You can use the deprecated --link flag to keep the port from being mapped to the host (IIUC):
docker run -d --name mqtt mqtt:latest && docker run -d --name nodejs --link mqtt nodejs:latest
Again, IIUC, this creates a private network between the two containers...
Evidence of that is that netstat -an | grep EST does not show connections between containers that are --link'd this way, even if the target port is open to the world.
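Since --link is deprecated, a sketch of the modern equivalent is a user-defined bridge network, which gives the same container-to-container connectivity while nothing is published to the host (image names reused from the example above):
docker network create app_net
docker run -d --name mqtt --network app_net mqtt:latest
docker run -d --name nodejs --network app_net nodejs:latest
The Node.js container can then connect to mqtt:1883 (assuming the broker listens on the default MQTT port) without that port ever being mapped on the host.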
