I am running a Python application inside a Docker container using Elastic Beanstalk in a private subnet, and I want to get the private/local IP of the EC2 instance. Is it possible to get the local IP address inside a Docker container without using curl http://169.254.169.254/latest/meta-data/local-ipv4?
I tried docker run --net=host <image_name>, but it is still not accessible.
You can get the local ip of a Linux instance with this command:
hostname -I | awk '{print $1}'
For EB, use .ebextensions and write a bash script in a script_name.config to run: export HOST_IP=$(hostname -I | awk '{print $1}')
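As an illustration of the idea (outside Elastic Beanstalk, with a placeholder image name), the host IP can be captured once and handed to the container as an environment variable; the .ebextensions script would do the equivalent before the container starts:
HOST_IP=$(hostname -I | awk '{print $1}')      # first local IP of the host
docker run -e HOST_IP="$HOST_IP" <image_name>  # the app inside reads $HOST_IP instead of the metadata URL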
I have a host with PostgreSQL and a Docker container. PostgreSQL runs on port 5432. The Docker container must connect to the database. How do I connect the container to the database through the Dockerfile or the run command? EXPOSE 5432 and docker run -p 5432:5432 ... did not help.
From the documentation page:
Sometimes you need to connect to the Docker host from within your
container. To enable this, pass the Docker host’s IP address to the
container using the --add-host flag. To find the host’s address, use
the ip addr show command.
$ HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut -d / -f 1`
$ docker run --add-host=docker:${HOSTIP} --rm -it busybox telnet docker 5432
EXPOSE and the -p flag work the other way around, i.e. they publish container ports to the host, which is not what you want here.
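Once the docker alias from --add-host is in place, a client inside the container can reach the host's PostgreSQL by that name; a hedged example, assuming the psql client is installed in the image and the database/user names are placeholders:
# inside the container, "docker" resolves to the host's IP
psql -h docker -p 5432 -U postgres -d mydb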
I am working on a Linux machine and I wrote a script that passes the local host's IP address to a Docker container as a parameter. It works fine on Ubuntu.
Will the same script run on macOS and work as expected (pass the IP address of the local host to the Docker container)?
docker run -t -i -e "DOCKER_HOST=$(ip -4 addr show eth0 | grep -Po 'inet \K[\d.]+')" $IMAGE_NAME
On OSX use this command line:
docker run -it -e "DOCKER_HOST=$(ifconfig en0 | awk '/ *inet /{print $2}')" $IMAGE_NAME
On a Mac, you will be using a VM, so you might want to pass the IP of the docker machine you have declared:
(image from "docker on Mac OS X")
eval $(docker-machine env default)
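The VM's address itself can be obtained with docker-machine ip and passed the same way as on Linux (a sketch, assuming the machine is named default as in the eval line above):
docker run -it -e "DOCKER_HOST=$(docker-machine ip default)" $IMAGE_NAME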
I'm perfectly happy with the IP range that Docker gives me by default (172.17.x.x), so I don't need to create a new bridge; I just want to give my containers a static address within that range so I can point client browsers to it directly.
I tried using
RUN echo "auto eth0" >> /etc/network/interfaces
RUN echo "iface eth0 inet static" >> /etc/network/interfaces
RUN echo "address 176.17.0.250" >> /etc/network/interfaces
RUN echo "netmask 255.255.0.0" >> /etc/network/interfaces
RUN ifdown eth0
RUN ifup eth0
from a Dockerfile, and it properly populated the interfaces file, but the interface itself didn't change. In fact, running ifup eth0 within the container gets this error:
RTNETLINK answers: Operation not permitted Failed to bring up eth0
I have already answered this here
https://stackoverflow.com/a/35359185/4094678
but I see now that this question is actually older than the aforementioned one, so I'll copy the answer here as well:
Easy with Docker version 1.10.1, build 9e83765.
First you need to create your own docker network (mynet123)
docker network create --subnet=172.18.0.0/16 mynet123
then simply run the image (I'll take ubuntu as an example)
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
then in ubuntu shell
ip addr
Additionally you could use
--hostname to specify a hostname
--add-host to add more entries to /etc/hosts
Docs (and why you need to create a network) at https://docs.docker.com/engine/reference/commandline/network_create/
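If you also want to pin the gateway of that user-defined network, docker network create accepts a --gateway flag; a minimal sketch:
docker network create --subnet=172.18.0.0/16 --gateway=172.18.0.1 mynet123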
I'm using the method described here in the official Docker documentation, and I have confirmed it works:
# At one shell, start a container and
# leave its shell idle and running
$ sudo docker run -i -t --rm --net=none base /bin/bash
root@63f36fc01b5f:/#
# At another shell, learn the container process ID
# and create its namespace entry in /var/run/netns/
# for the "ip netns" command we will be using below
$ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
2778
$ pid=2778
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
# Check the bridge's IP address and netmask
$ ip addr show docker0
21: docker0: ...
inet 172.17.42.1/16 scope global docker0
...
# Create a pair of "peer" interfaces A and B,
# bind the A end to the bridge, and bring it up
$ sudo ip link add A type veth peer name B
$ sudo brctl addif docker0 A
$ sudo ip link set A up
# Place B inside the container's network namespace,
# rename to eth0, and activate it with a free IP
$ sudo ip link set B netns $pid
$ sudo ip netns exec $pid ip link set dev B name eth0
$ sudo ip netns exec $pid ip link set eth0 up
$ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
$ sudo ip netns exec $pid ip route add default via 172.17.42.1
Using this approach, I always run my containers with --net=none and set IP addresses with an external script.
Actually, despite my initial failure, @MarkO'Connor's answer was correct. I created a new interface (docker0) in my host /etc/network/interfaces file, ran sudo ifup docker0 on the host, and then ran
docker run --net=host -i -t ...
which picked up the static IP and assigned it to docker0 in the container.
Thanks!
This worked for me:
docker run --cap-add=NET_ADMIN -d -it myimages/image1 /bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0; bash"
Explained:
--cap-add=NET_ADMIN grants rights to administer the network (i.e. for the /sbin/ip command)
myimages/image1 the image for the container
/bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0 ; bash"
Inside the container, run ip addr add 172.17.0.8 dev eth0 to add the new IP address 172.17.0.8 to this container (caution: use an IP address that is free now and will remain free). Then run bash, just so the container is not stopped automatically.
Bonus:
My target scenario: set up a distributed app with containers playing different roles in the dist-app. A "conductor container" is able to run docker commands itself (from inside) so as to start and stop containers as needed.
Each container is configured to know where to connect to access a particular role/container in the dist-app (so the set of IPs for each role must be known by each partner).
To do this:
"conductor container"
image created with this Dockerfile
FROM pin3da/docker-zeromq-node
MAINTAINER Foobar
# install docker software
RUN apt-get -yqq update && apt-get -yqq install docker.io
# declare /var/run/docker.sock as a volume so the host's socket can be mounted into it
VOLUME /var/run/docker.sock
image build command:
docker build --tag=myimages/conductor --file=Dockerfile .
container run command:
docker run -v /var/run/docker.sock:/var/run/docker.sock --name=conductor1 -d -it myimages/conductor bash
Run containers with different roles.
First (not absolutely necessary) add entries to /etc/hosts to locate partners by ip or name (option --add-host)
Second (obviously required) assign an IP to the running container (use /sbin/ip inside it)
docker run --cap-add=NET_ADMIN --add-host worker1:172.17.0.8 --add-host worker2:172.17.0.9 --name=worker1 -h worker1.example.com -d -it myimages/image1 /bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0; bash"
Docker containers by default do not have sufficient privileges to manipulate the network stack. You can try adding --cap-add=NET_ADMIN to the run command to allow this specific capability. Or you can try --privileged=true (grants all rights) for testing.
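A quick, hedged way to see that NET_ADMIN is the capability that matters (busybox ships an ip applet):
docker run --rm busybox ip link set dev lo down                      # fails: Operation not permitted
docker run --rm --cap-add=NET_ADMIN busybox ip link set dev lo down  # succeeds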
Another option is to use pipework from the host.
Set up your own bridge (e.g. br0)
Start docker with: -b=br0
and with pipework (192.168.1.1 below being the default gateway IP address):
pipework br0 container-name 192.168.1.10/24@192.168.1.1
Edit: do not start with --net=none: this closes container ports.
See further notes
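For reference, a minimal sketch of the first two steps (interface names and addresses are illustrative; assumes bridge-utils is installed and that you can restart the Docker daemon):
sudo brctl addbr br0                      # create the custom bridge
sudo ip addr add 192.168.1.1/24 dev br0   # the gateway address used in the pipework line
sudo ip link set br0 up
sudo dockerd -b=br0                       # or set "bridge": "br0" in the daemon configuration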
I understood that you are not looking at multi-host networking of containers at this stage, but I believe you are likely to need it soon. Weave would allow you to first define multiple container networks on one host, and then potentially move some containers to another host without losing the static IP you have assigned to them.
pipework is also great, but if you can use hostnames instead of IPs then you can try this script
#!/bin/bash
# This function will list the IPs of all running containers
function listip {
    for vm in `docker ps|tail -n +2|awk '{print $NF}'`;
    do
        ip=`docker inspect --format '{{ .NetworkSettings.IPAddress }}' $vm`;
        echo "$ip $vm";
    done
}

# This function will copy the hosts file to /etc/hosts of every running container
function updateip {
    for vm in `docker ps|tail -n +2|awk '{print $NF}'`;
    do
        echo "copy hosts file to $vm";
        docker exec -i $vm sh -c 'cat > /etc/hosts' < /tmp/hosts
    done
}

listip > /tmp/hosts
updateip
You just need to run this command every time you boot up your Docker labs.
You can find my scripts with additional functions here: dockerip
For completeness: there's another method suggested on the Docker forums. (Edit: and mentioned in passing by the answer from Андрей Сердюк).
Add the static IP address on the host system, then publish ports to that ip, e.g. docker run -p 192.0.2.1:80:80 -d mywebserver.
Of course that syntax won't work for IPv6 and the documentation doesn't mention that...
It sounds wrong to me: the usual wildcard binds (*:80) on the host theoretically conflict with the container. In practice the Docker port takes precedence and doesn't conflict, because of how it's implemented using iptables. But your public container IP will still respond on all the non-conflicting ports, e.g. ssh.
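For completeness, a minimal sketch of the whole approach (eth0 and the addresses are illustrative):
sudo ip addr add 192.0.2.1/24 dev eth0         # add the extra static IP on the host
docker run -p 192.0.2.1:80:80 -d mywebserver   # publish port 80 only on that address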
I discovered that --net=host might not always be the best option, as it might allow users to shut down the host from the container! In any case, it turns out that the reason I couldn't do it properly from inside was that network configuration is restricted to sessions started with the --privileged=true argument.
You can set up SkyDNS with a service discovery tool - https://github.com/crosbymichael/skydock
Or: simply create a network interface and publish Docker container ports on it, as here: https://gist.github.com/andreyserdjuk/bd92b5beba2719054dfe
I would like to make my docker containers aware of their configuration, the same way you can get information about EC2 instances through metadata.
I can use (provided docker is listening on port 4243)
curl http://172.17.42.1:4243/containers/$HOSTNAME/json
to get some of its data, but I would like to know if there is a better way, at least to get the full ID of the container, because HOSTNAME is actually shortened to 12 characters and Docker seems to perform a "best match" on it.
Also, how can I get the external IP of the Docker host (other than accessing the EC2 metadata, which is specific to AWS)?
Unless overridden, the hostname seems to be the short container id in Docker 1.12
root@d2258e6dec11:/project# cat /etc/hostname
d2258e6dec11
Externally
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2258e6dec11 300518d26271 "bash" 5 minutes ago
$ docker -v
Docker version 1.12.0, build 8eab29e, experimental
I've found out that the container ID can be found in /proc/self/cgroup
So you can get the ID with:
cat /proc/self/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/"
A comment by madeddie looks most elegant to me:
CID=$(basename $(cat /proc/1/cpuset))
You can communicate with Docker from inside a container using a Unix socket via the Docker Remote API:
https://docs.docker.com/engine/reference/api/docker_remote_api/
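For example, if the host's socket is mounted into the container (-v /var/run/docker.sock:/var/run/docker.sock), a reasonably recent curl (7.40+) can query the API directly over the socket; a hedged sketch, with the same security caveats as noted elsewhere about exposing docker.sock:
curl --unix-socket /var/run/docker.sock http://localhost/containers/$HOSTNAME/json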
In a container, you can find the shortened Docker ID by examining the $HOSTNAME env var.
According to the docs, there is a small chance of collision; I think that for a small number of containers you do not have to worry about it. I don't know how to get the full ID directly.
You can inspect the container in a similar way as outlined in banyan's answer:
GET /containers/4abbef615af7/json HTTP/1.1
Response:
HTTP/1.1 200 OK
Content-Type: application/json
{
    "Id": "4abbef615af7...... ",
    "Created": "2013.....",
    ...
}
Alternatively, you can transfer the Docker ID to the container in a file.
The file is located on a mounted volume, so it is transferred into the container:
docker run -t -i --cidfile /mydir/host1.txt -v /mydir:/mydir ubuntu /bin/bash
The full container ID will be in the file /mydir/host1.txt inside the container.
This will get the full container id from within a container:
cat /proc/self/cgroup | grep "cpu:/" | sed 's/\([0-9]\):cpu:\/docker\///g'
WARNING: You should understand the security risks of this method before you consider it. John's summary of the risk:
By giving the container access to /var/run/docker.sock, it is [trivially easy] to break out of the containment provided by docker and gain access to the host machine. Obviously this is potentially dangerous.
Inside the container, the Docker ID is your hostname.
So, you could:
install the docker-io package in your container with the same version as the host
start it with --volume /var/run/docker.sock:/var/run/docker.sock --privileged
finally, run: docker inspect $(hostname) inside the container
Avoid this. Only do it if you understand the risks and have a clear mitigation for the risks.
To make it simple,
Container ID is your host name inside docker
Container information is available inside /proc/self/cgroup
To get host name,
hostname
or
uname -n
or
cat /etc/hostname
The output can be redirected to any file and read back from the application
E.g.: # hostname > /usr/src//hostname.txt
I've found that in 17.09 there is a very simple way to do it from within the Docker container:
$ cat /proc/self/cgroup | head -n 1 | cut -d '/' -f3
4de1c09d3f1979147cd5672571b69abec03d606afcc7bdc54ddb2b69dec3861c
Or, as has already been mentioned, a shorter version with
$ cat /etc/hostname
4de1c09d3f19
Or simply:
$ hostname
4de1c09d3f19
Docker sets the hostname to the container ID by default, but users can override this with --hostname. Instead, inspect /proc:
$ more /proc/self/cgroup
14:name=systemd:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
13:pids:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
12:hugetlb:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
11:net_prio:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
10:perf_event:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
9:net_cls:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
8:freezer:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
7:devices:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
6:memory:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
5:blkio:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
4:cpuacct:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
3:cpu:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
2:cpuset:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
1:name=openrc:/docker
Here's a handy one-liner to extract the container ID:
$ grep "memory:/" < /proc/self/cgroup | sed 's|.*/||'
7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
Some posted solutions have stopped working due to changes in the format of /proc/self/cgroup. Here is a single GNU grep command that should be a bit more robust to format changes:
grep -o -P -m1 'docker.*\K[0-9a-f]{64,}' /proc/self/cgroup
For reference, here are snippets of /proc/self/cgroup from inside Docker containers that have been tested with this command:
Linux 4.4:
11:pids:/system.slice/docker-cde7c2bab394630a42d73dc610b9c57415dced996106665d427f6d0566594411.scope
...
1:name=systemd:/system.slice/docker-cde7c2bab394630a42d73dc610b9c57415dced996106665d427f6d0566594411.scope
Linux 4.8 - 4.13:
11:hugetlb:/docker/afe96d48db6d2c19585572f986fc310c92421a3dac28310e847566fb82166013
...
1:name=systemd:/docker/afe96d48db6d2c19585572f986fc310c92421a3dac28310e847566fb82166013
I believe the "problem" with all of the above is that they depend on an implementation convention of Docker itself, or of how it interacts with cgroups and /proc, and not on a committed, public API, protocol, or convention that is part of the OCI specs.
Hence these solutions are "brittle" and likely to break when least expected, when implementations change or conventions are overridden by user configuration.
Container and image IDs should be injected into the runtime environment by the component that initiated the container instance, if for no other reason than
to permit the code running therein to use that information to uniquely identify itself for logging/tracing etc...
just my $0.02, YMMV...
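As a hedged illustration of that suggestion (all names here are made up), the launcher passes the identity in explicitly instead of the code scraping /proc:
docker run --name worker1 -e CONTAINER_NAME=worker1 myimage
# inside the container the application just reads the variable:
echo "$CONTAINER_NAME"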
There are 3 places that I see that might work so far; each has advantages and disadvantages:
echo $HOSTNAME or hostname
cat /proc/self/cgroup
cat /proc/self/mountinfo
$HOSTNAME is easy, but it is partial, and it will also be overwritten with the pod name by K8s.
/proc/self/cgroup seems to work with cgroup v1 but won't be there when hosted on cgroup v2.
/proc/self/mountinfo will still have the container ID under cgroup v2; however, the mount point will have different values under different container runtimes.
For example, in docker engine, the value looks like:
678 655 254:1 /docker/containers/7a0144cee1256c539fab790199527b7051aff1b603ebcf7ed3fd436440ef3b3a/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw
In containerd (the default K8s engine lately), it looks like:
1733 1729 0:35 /kubepods/besteffort/pod3272f253-be44-4a82-a541-9083e68cf99f/7a0144cee1256c539fab790199527b7051aff1b603ebcf7ed3fd436440ef3b3a /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,blkio
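For cgroup v2 hosts, a hedged one-liner in the same spirit as the cgroup v1 greps above, relying on the 64-hex-character ID that both runtimes embed in the mount paths shown:
grep -o -E -m1 '[0-9a-f]{64}' /proc/self/mountinfo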
Also, the biggest problem with all of the above is that they are implementation details; there is no abstraction, and they could all change over time.
There is an effort to make it standard and I think it is worth watching:
https://github.com/opencontainers/runtime-spec/issues/1105
You can use this command line to identify the current container ID (tested with docker 1.9).
awk -F"-|/." '/1:/ {print $3}' /proc/self/cgroup
Then, a little request to the Docker API (you can share /var/run/docker.sock) to retrieve all the information.
awk -F'[:/]' '(($4 == "docker") && (lastId != $NF)) { lastId = $NF; print $NF; }' /proc/self/cgroup
As an aside, if you have the pid of the container and want to get the docker id of that container, a good way is to use nsenter in combination with the sed magic above:
nsenter -n -m -t pid -- cat /proc/1/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/"
Use docker inspect.
$ docker ps # get container id
$ docker inspect 4abbef615af7
[{
    "ID": "4abbef615af780f24991ccdca946cd50d2422e75f53fb15f578e14167c365989",
    "Created": "2014-01-08T07:13:32.765612597Z",
    "Path": "/bin/bash",
    "Args": [
        "-c",
        "/start web"
    ],
    "Config": {
        "Hostname": "4abbef615af7",
        ...
You can get the IP as follows.
$ docker inspect --format="{{ .NetworkSettings.IPAddress }}" 2a5624c52119
172.17.0.24