Is it possible to have multiple IPs on eth0 in a Docker container?
I would like to have 5 IPs on the eth0 interface in a Docker container. I am using the "ip" utility. Executing ip address add 172.20.0.200/16 dev eth0 in the container gives "Operation not permitted".
I tried manually logging into the container as the root user using "docker exec -u root ...".
I have even tried installing sudo in the container (apt-get install sudo). The result is the same: "Operation not permitted".
I have found the answer. The --cap-add option must be added:
docker run --cap-add=NET_ADMIN image
But as I understand it, Docker knows nothing about these static IPs, since docker network inspect shows only one IP. Hence, I am thinking of using a custom network.
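For example, a minimal sketch of adding the extra addresses once NET_ADMIN is granted (alpine here is just a convenient test image; the addresses are the ones from the question, which Docker itself will not track):
$ docker run -it --cap-add=NET_ADMIN alpine sh
/ # ip address add 172.20.0.200/16 dev eth0
/ # ip address add 172.20.0.201/16 dev eth0
/ # ip address show eth0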
I followed this tutorial: https://learn.microsoft.com/en-us/azure/container-instances/tutorial-docker-compose
But when I set up my project, there's no default network created.
Does anybody know how to solve this?
The default network should be bridge, as can be seen with
$ docker network ls
Whenever you start a container without the --network flag, it is connected to the default bridge network. You can also pass the --network bridge flag explicitly in your docker run command.
Alternatively, you can connect a container to a network using
$ docker network connect <network> <container>
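For example (web and nginx are placeholders for your own container and image; --network none starts the container detached from any network first):
$ docker run -d --network none --name web nginx
$ docker network connect bridge web
$ docker network inspect bridge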
I am working on a project that requires me to create a virtual CAN network on my host machine:
$ sudo modprobe vcan
$ sudo ip link add dev vcan0 type vcan
$ sudo ip link set up vcan0
My ifconfig:
My question is: how can I share this interface with my Docker container?
If it's of any use, I ran the following command on my host machine: find / -name "vcan0" -print 2>/dev/null
/sys/class/net/vcan0
/sys/devices/virtual/net/vcan0
/proc/sys/net/ipv4/conf/vcan0
/proc/sys/net/ipv4/neigh/vcan0
I can run the Docker container using docker run --rm -it --network=host .... The only problem is that there is no network isolation between the Docker host and containers anymore. Is there a way to achieve the above without sharing the host network?
I haven't found a way to share a CAN network interface with a Docker container without --network=host, but there is a possible workaround. You can use a CAN-UDP bridge, like cannelloni or can2udp, to send CAN frames over UDP.
I've used this in the past to connect a physical CAN bus on a remote device to a virtual CAN interface on my laptop. But it should work just as well for a Docker container.
One drawback is that you do have to create a vcan interface in the container, which requires you to run the container in privileged mode.
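As a rough sketch of the idea (the flags below follow cannelloni's README as I remember it; treat them as an assumption and check cannelloni -h before relying on them):
$ # on the host: forward vcan0 frames to the container over UDP
$ cannelloni -I vcan0 -R <container_ip> -r 20000 -l 20000
$ # inside the container, which has its own vcan0: the mirror setup
$ cannelloni -I vcan0 -R <host_ip> -r 20000 -l 20000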
You can use --cap-add=NET_ADMIN when you run the Docker image. This will allow you to create the interface inside the container:
$ sudo ip link add dev vcan0 type vcan
$ sudo ip link set up vcan0
Of course, the vcan driver must be loaded on the host.
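Putting both sides together, a minimal sketch (<image> is a placeholder for your own image):
$ sudo modprobe vcan                 # on the host, load the driver
$ docker run --rm -it --cap-add=NET_ADMIN <image> sh
Then, inside the container:
$ ip link add dev vcan0 type vcan
$ ip link set up vcan0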
I've written up a blog post that should get you most of the way there. At a high level, you need to create a vxcan link and move one end of it into your Docker container. Then you can forward traffic from your vcan interface to one end of the vxcan interface, and it will be transmitted to the vxcan inside the container. You'll just need to use the correct kernel headers package, and in the final cangw step you'll need to specify vcan0 instead of can0.
https://www.lagerdata.com/blog/forwarding-can-bus-traffic-to-a-docker-container-using-vxcan-on-raspberry-pi
I created two services, service-a (3 replicas) and service-b (5 replicas).
They are on the micro overlay network.
I want to get all container IPs from DNS:
# docker run --rm --network micro alpine nslookup service-a
But I only get one IP. Is there any way to get all IP addresses of a service using DNS?
Use tasks.<service-name> to get all containers belonging to a particular service.
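For example, mirroring the nslookup above:
# docker run --rm --network micro alpine nslookup tasks.service-a
This should list the IPs of all replicas, whereas the plain service name resolves only to the service's single virtual IP.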
You can use docker service inspect to get the virtual IP of a service:
NETWORK_ID=$(docker network ls -q --no-trunc --filter name=micro) && docker service inspect service-a -f "{{range \$i, \$value := .Endpoint.VirtualIPs}} {{if eq \$value.NetworkID \"$NETWORK_ID\" }}{{\$value.Addr}}{{end}}{{end}}"
This one-line command finds the network ID, then uses it in docker service inspect to get only the service's virtual IPs on this network, using the -f (format) parameter.
At the moment I'm running a Node.js application inside a Docker container, which needs to connect to Camunda, which runs in another container.
I start the containers with the following commands:
docker run -d --restart=always --name camunda -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app -p 3000:3000 app
Both applications are now running. I can access Camunda by navigating to my host's IP on port 8000, and running wget http://localhost:8000 -q -O - on the host also returns the Camunda page. But when I log into my app container with docker exec -it app sh and type wget http://localhost:8000 -q -O -, I cannot access Camunda. Instead I get the following error:
wget: can't connect to remote host (127.0.0.1): Connection refused
When I link my app container to the camunda container with --link camunda:camunda, and type wget http://camunda:8000 -q -O - in my app container, I get the following error:
wget: can't connect to remote host (172.17.0.4): Connection refused
I've seen this option, so I started my app container with --add-host camunda:my_hosts_ip and tried wget again, resulting in:
wget: can't connect to remote host (149.210.227.191): Operation timed out
When running wget http://149.210.227.191:5001 -q -O - on my host machine however, I get a correct response immediately.
Ideally I would like to just start my app container without needing to supply the external IP in any way, and let the app container reach the Camunda service via localhost or by linking the camunda container to my app container. What would be the easiest way to achieve this?
Why does it not work?
Containers and the host do not share their local IP stacks. Thus, when you are inside a container and run any command against localhost:port, that command will try to connect to the container's own local IP stack, not the other container's and not the host's.
How to make it work?
Hard way: you need to know the IP address of the other container and connect to that IP address directly.
Easier and cleaner way: link your containers.
--link=[]
Add link to another container in the form of <name or id>:alias or just <name or id> in which case the alias will match the name
So you'll need to perform, assuming the camunda container is named camunda:
docker run -d --name app -p 3000:3000 --link camunda app
Then, once you docker-exec-ed into the container app you will be able to execute wget http://camunda:8080 -q -O - without error.
Note that while the linked-containers graph cannot loop (e.g., camunda cannot in turn be linked to app, since a container must already be started before you can link to it), you can actually do whatever you want/need by playing with IP addresses.
Note also that you can specify the IP address of a container using the --ip option (though it can only be used in conjunction with --net for user-defined networks).
Original answer below. Note that --link has been deprecated and the recommended replacement is networks. That is explained in the answer to this question: docker-compose: difference between network and link
--
Use the --link camunda:camunda option for your app container. Then you can access Camunda via http://camunda:8080/.... The link option adds an entry to the /etc/hosts file of the app container with the IP address of the camunda container. This also means you have to restart your app container if you restart the camunda container.
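With the network-based replacement mentioned above, a sketch of the same setup would be (app-net is a made-up name; containers on the same user-defined network can resolve each other by container name without --link):
$ docker network create app-net
$ docker run -d --restart=always --name camunda --network app-net -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
$ docker run -d --name app --network app-net -p 3000:3000 app
$ docker exec -it app wget http://camunda:8080 -q -O -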
I'm trying to deploy my application using Docker and came across an issue: restarting named containers assigns a different IP to the container. Maybe explaining what I am doing will better explain the issue:
Postgres runs inside a separate container named "postgres":
$ PG_ID=$(docker run -d --name postgres postgres/image)
My webapp container links to the postgres container:
$ APP_ID=$(docker run -d --link postgres:postgres webapp/image)
Linking the postgres container to the webapp container inserts into the webapp container a hosts-file entry with the IP of the postgres container. This allows me to point to the postgres DB within my webapp using postgres:5432 (I am using Django, btw). This all works well, except if for some reason postgres crashes.
Before I manually stop the postgres process to simulate a crash, I verify the IP of the postgres container:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.73
Now to simulate crash I stop postgres container:
$ docker stop $PG_ID
If I now restart postgres using
$ docker start $PG_ID
the IP of the container changes:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.74
Therefore the IP which points to the postgres container in the webapp container is no longer correct. I thought that by naming a container, Docker assigns it a name with specific configuration so that you can reliably link between containers (both network and volumes). If the IP changes, this seems to defeat the purpose.
If I have to restart my webapp process each time postgres restarts, this does not seem any better than just using a single container to run both processes. Then I could use supervisor or something similar to keep both of them running and use localhost to link between the processes.
I am still new to Docker so am I doing something wrong or is this a bug in docker?
2nd UPDATE: maybe you already discovered this, but as a workaround, I plan to map the database service's port to the host interface (e.g., with -p 5432:5432) and connect the webapps to the host IP (the IP of the docker0 interface; on my Ubuntu and CentOS machines, the IP is 172.17.42.1). If you restart the postgres container, the container's IP will change, but it will still be accessible via 172.17.42.1:5432. The downside is that you are exposing that port to all the containers and lose the fine-grained mapping that --link gives you.
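A sketch of that workaround (the docker0 IP 172.17.42.1 is from my machines and may differ on yours):
$ docker run -d --name postgres -p 5432:5432 postgres/image
Then point the webapp at the docker0 address instead of the link alias, e.g. 'HOST': '172.17.42.1', 'PORT': '5432' in Django's DATABASES setting.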
--- OLD UPDATES:
CORRECTION: Docker will map 'postgres' to the container's IP in the /etc/hosts file on the webapp container. So, in the webapp container, you can ping 'postgres' and it will resolve to that IP.
1st UPDATE: I've seen that Docker generates and mounts /etc/hosts, /etc/resolv.conf, etc. so that they always have the correct information, but this does not apply when the linked container is restarted. So, I had assumed (wrongly) that Docker would update the hosts file.
-- ORIGINAL (wrong) response:
Add --hostname=postgres-db (you can use anything; I'm using something different from 'postgres' to avoid confusion with the container name):
$ docker run --name postgres --hostname postgres-db postgres/image
Docker will map 'postgres-db' to the container's IP (check the contents of /etc/hosts on the webapp container).
This will allow you to run 'ping postgres-db' from the webapp container. If the IP changes, Docker will update /etc/hosts for you.
In the Django app, use 'postgres-db' instead of the IP (or whatever you used for the --hostname of the container running PostgreSQL).
According to https://docs.docker.com/engine/reference/commandline/run/, it should be possible to assign a static IP for your container -- at the time of container creation -- using the --ip option:
Example:
docker run -itd --ip 172.30.100.104 --name postgres postgres/image
....where 172.30.100.104 is a free IP address on a custom bridge/overlay network.
This should then retain the same IP address even if the postgres container crashes/restarts.
It looks like this was released in Docker Engine v1.10 or greater, so if you have a lower version, you will have to upgrade first.
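For completeness, a sketch including the custom network that --ip requires (network name and subnet are placeholders):
$ docker network create --subnet 172.30.100.0/24 my-net
$ docker run -itd --network my-net --ip 172.30.100.104 --name postgres postgres/image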
As of Docker 1.0, they implemented a stronger sense of linked containers. Now you can use the container instance name as if it were the hostname.
Here is a link
I found a link that better describes your problem. And while that question was answered, I wonder whether this ambassador pattern might solve the problem... this assumes that the ambassador is more reliable than the services being linked.