I want to set the Docker container hostname to the hostname of the machine on which Docker is installed. Please note that I want to set the hostname dynamically; I don't want to hardcode the machine hostname in my docker run command.
How do I achieve this?
My docker run command:
sudo docker run --name=rabbitmq -d -p 5672:5672 -p 15672:15672 \
-e RABBITMQ_DEFAULT_USER=admin \
-e RABBITMQ_DEFAULT_PASS=admin \
--hostname ?? \
-v rmq_vol:/var/lib/rabbitmq \
rabbitmq:3.9.0
What KamilCuk said.
add to docker run: --hostname $(hostname)
You're just passing the output of the Linux hostname command into your docker run configuration.
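Applied to the command from the question, that would look something like this (quoting the substitution to be safe):
sudo docker run --name=rabbitmq -d -p 5672:5672 -p 15672:15672 \
-e RABBITMQ_DEFAULT_USER=admin \
-e RABBITMQ_DEFAULT_PASS=admin \
--hostname "$(hostname)" \
-v rmq_vol:/var/lib/rabbitmq \
rabbitmq:3.9.0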
I am running the command below to add an ingress rule with the AWS CLI. It works fine if I provide an IP address explicitly, but I want the command to look up my IP and pass it in.
I was trying something like the following, but it is not working:
aws ec2 authorize-security-group-ingress --group-id sg-123456778 --protocol tcp --port 22 --cidr echo "$(curl https://checkip.amazonaws.com)/32" --profile xyzzy
If I do it as below, it works, but I want to achieve it the first way.
IP=`echo "$(curl https://checkip.amazonaws.com)/32"`
aws ec2 authorize-security-group-ingress --group-id sg-123456778 --protocol tcp --port 22 --cidr $IP --profile xyzzy
Use -s with curl.
Try this:
aws ec2 authorize-security-group-ingress --group-id sg-123456778 --protocol tcp --port 22 --cidr $(echo "$(curl -s https://checkip.amazonaws.com)/32") --profile xyzzy
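As a slightly cleaner sketch, the inner echo isn't needed at all; the command substitution can be quoted and inlined directly:
aws ec2 authorize-security-group-ingress --group-id sg-123456778 --protocol tcp --port 22 --cidr "$(curl -s https://checkip.amazonaws.com)/32" --profile xyzzy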
I would like to connect from a php docker, through an OpenVPN docker, to an OpenVPN client.
Network structure
I have added a Docker network (192.168.200.0/24)
The php docker has the ip 192.168.200.3
The vpn docker has the ip 192.168.200.2
The configuration of the vpn docker looks like
root@ip-10-8-0-20:/home/ubuntu/docker-compose# cat vpn/openvpn-data/conf/openvpn.conf
server 192.168.255.0 255.255.255.0
verb 3
key /etc/openvpn/pki/private/vpn.***.de.key
ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/issued/vpn.***.de.crt
dh /etc/openvpn/pki/dh.pem
tls-auth /etc/openvpn/pki/ta.key
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto udp
# Rely on Docker to do port mapping, internally always 1194
port 1194
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
comp-lzo no
### Route Configurations Below
route 192.168.255.0 255.255.255.0
### Push Configurations Below
push "block-outside-dns"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
push "comp-lzo no"
push "route 192.168.200.0 255.255.255.0"
The .env file of the vpn docker looks like:
root@ip-10-8-0-20:/home/ubuntu/docker-compose# cat vpn/openvpn-data/conf/ovpn_env.sh
declare -x OVPN_AUTH=
declare -x OVPN_CIPHER=
declare -x OVPN_CLIENT_TO_CLIENT=
declare -x OVPN_CN=vpn.***.de
declare -x OVPN_COMP_LZO=0
declare -x OVPN_DEFROUTE=1
declare -x OVPN_DEVICE=tun
declare -x OVPN_DEVICEN=0
declare -x OVPN_DISABLE_PUSH_BLOCK_DNS=0
declare -x OVPN_DNS=1
declare -x OVPN_DNS_SERVERS=([0]="8.8.8.8" [1]="8.8.4.4")
declare -x OVPN_ENV=/etc/openvpn/ovpn_env.sh
declare -x OVPN_EXTRA_CLIENT_CONFIG=()
declare -x OVPN_EXTRA_SERVER_CONFIG=()
declare -x OVPN_FRAGMENT=
declare -x OVPN_KEEPALIVE='10 60'
declare -x OVPN_MTU=
declare -x OVPN_NAT=0
declare -x OVPN_PORT=1194
declare -x OVPN_PROTO=udp
declare -x OVPN_PUSH=([0]="route 192.168.200.0 255.255.255.0")
declare -x OVPN_ROUTES=([0]="192.168.255.0/24")
declare -x OVPN_SERVER=192.168.255.0/24
declare -x OVPN_SERVER_URL=udp://vpn.***.de
declare -x OVPN_TLS_CIPHER=
So I created a client config and put it on the local server that the php script needs to connect to. I started the vpn docker successfully, and the server has the VPN IP 192.168.255.1. I then started the VPN connection on the local server and it connected correctly; it gets the VPN IP 192.168.255.6.
I can ping from the vpn docker to the local server and back. That works.
After that I added a route on php docker:
ip route add 192.168.255.0/24 via 192.168.200.3
I can ping 192.168.255.1 from the php docker successfully, but not 192.168.255.6 (the local server).
So I checked the forwarding in the vpn docker.
Then I thought I had to add an iptables rule:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
But it still won't work. Then I thought I had to add another iptables rule:
iptables -A FORWARD -p tcp -i eth1 -o tun0 --match multiport --dports=80,443 -m conntrack --ctstate=NEW -j ACCEPT
I want to call a website on port 80 on the local server from the php docker, but it still won't work.
I don't know what I am missing. Could you help me to find the problem?
I'm trying to make a Docker container accessible on e.g. 1.2.3.4:9999:99 from the Internet (so from outside the container), and to be seen as that same IP from inside, so that when I'm inside the container and run curl http://bot.whatismyipaddress.com/ I get 1.2.3.4. I've been struggling with it for hours with no progress.
I'm running the container with docker run --name public254 -d -p 123.456.789.254:22:22 some-image:latest and it's accessible through 123.456.789.254 indeed. From inside, however, it's seen as having the main IP of the host, as it's supposed to.
Now I want to modify this. What should I do next?
Well. I did it.
Enable forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
Find out container's internal IP
docker inspect -f '{{ .NetworkSettings.IPAddress }}' some_container
Route it correctly
iptables -t nat -I POSTROUTING -p all -s <container internal IP> -j SNAT --to-source <container external IP>
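Put together as one small sketch, reusing the container name and the placeholder address from the question (substitute your own values):
# 1. enable forwarding on the host
echo 1 > /proc/sys/net/ipv4/ip_forward
# 2. find the container's internal IP on the Docker bridge
CONTAINER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' public254)
# 3. SNAT the container's outbound traffic to the external IP it was published on
iptables -t nat -I POSTROUTING -p all -s "$CONTAINER_IP" -j SNAT --to-source 123.456.789.254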
I'm perfectly happy with the IP range that docker is giving me by default 176.17.x.x, so I don't need to create a new bridge, I just want to give my containers a static address within that range so I can point client browsers to it directly.
I tried using
RUN echo "auto eth0" >> /etc/network/interfaces
RUN echo "iface eth0 inet static" >> /etc/network/interfaces
RUN echo "address 176.17.0.250" >> /etc/network/interfaces
RUN echo "netmask 255.255.0.0" >> /etc/network/interfaces
RUN ifdown eth0
RUN ifup eth0
from a Dockerfile, and it properly populated the interfaces file, but the interface itself didn't change. In fact, running ifup eth0 within the container gets this error:
RTNETLINK answers: Operation not permitted Failed to bring up eth0
I have already answered this here
https://stackoverflow.com/a/35359185/4094678
but I see now that this question is actually older than the aforementioned one, so I'll copy the answer here as well:
Easy with Docker version 1.10.1, build 9e83765.
First you need to create your own Docker network (mynet123):
docker network create --subnet=172.18.0.0/16 mynet123
then simply run the image (I'll take ubuntu as an example):
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
then in the ubuntu shell:
ip addr
Additionally you could use
--hostname to specify a hostname
--add-host to add more entries to /etc/hosts
Docs (and why you need to create a network) at https://docs.docker.com/engine/reference/commandline/network_create/
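For example, a run command combining those flags might look like this (the hostname and the extra host entry are purely illustrative):
docker run --net mynet123 --ip 172.18.0.22 --hostname myhost.example.com --add-host db:172.18.0.23 -it ubuntu bash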
I'm using the method written here in the official Docker documentation, and I have confirmed it works:
# At one shell, start a container and
# leave its shell idle and running
$ sudo docker run -i -t --rm --net=none base /bin/bash
root@63f36fc01b5f:/#
# At another shell, learn the container process ID
# and create its namespace entry in /var/run/netns/
# for the "ip netns" command we will be using below
$ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
2778
$ pid=2778
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
# Check the bridge's IP address and netmask
$ ip addr show docker0
21: docker0: ...
inet 172.17.42.1/16 scope global docker0
...
# Create a pair of "peer" interfaces A and B,
# bind the A end to the bridge, and bring it up
$ sudo ip link add A type veth peer name B
$ sudo brctl addif docker0 A
$ sudo ip link set A up
# Place B inside the container's network namespace,
# rename to eth0, and activate it with a free IP
$ sudo ip link set B netns $pid
$ sudo ip netns exec $pid ip link set dev B name eth0
$ sudo ip netns exec $pid ip link set eth0 up
$ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
$ sudo ip netns exec $pid ip route add default via 172.17.42.1
Using this approach, I always run my containers with --net=none and set IP addresses with an external script.
Actually, despite my initial failure, @MarkO'Connor's answer was correct. I created a new interface (docker0) in my host /etc/network/interfaces file, ran sudo ifup docker0 on the host, and then ran
docker run --net=host -i -t ...
which picked up the static IP and assigned it to docker0 in the container.
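For anyone trying to reproduce this, a host-side /etc/network/interfaces stanza along these lines should do it (a sketch assuming Debian-style ifupdown with bridge-utils installed; the address is only an example, borrowed from the docker0 output above):
auto docker0
iface docker0 inet static
    address 172.17.42.1
    netmask 255.255.0.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0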
Thanks!
This worked for me:
docker run --cap-add=NET_ADMIN -d -it myimages/image1 /bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0; bash"
Explained:
--cap-add=NET_ADMIN grants rights to administer the network (i.e. for the /sbin/ip command)
myimages/image1 image for the container
/bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0 ; bash"
Inside the container, ip addr add 172.17.0.8 dev eth0 adds the new IP address 172.17.0.8 to this container (caution: use an IP address that is free now and will remain free). Then bash runs, just so the container isn't stopped automatically.
Bonus:
My target scenario: set up a distributed app with containers playing different roles in the dist-app. A "conductor container" is able to run docker commands by itself (from inside) so it can start and stop containers as needed.
Each container is configured to know where to connect to access a particular role/container in the dist-app (so the set of IPs for each role must be known by each partner).
To do this:
"conductor container"
image created with this Dockerfile
FROM pin3da/docker-zeromq-node
MAINTAINER Foobar
# install docker software
RUN apt-get -yqq update && apt-get -yqq install docker.io
# expose /var/run/docker.sock so it can be bind-mounted from the host
VOLUME /var/run/docker.sock
image build command:
docker build --tag=myimages/conductor --file=Dockerfile .
container run command:
docker run -v /var/run/docker.sock:/var/run/docker.sock --name=conductor1 -d -it myimages/conductor bash
Run containers with different roles.
First (not absolutely necessary), add entries to /etc/hosts to locate partners by IP or name (option --add-host).
Second (obviously required), assign an IP to the running container (use /sbin/ip inside it):
docker run --cap-add=NET_ADMIN --add-host worker1:172.17.0.8 --add-host worker2:172.17.0.9 --name=worker1 -h worker1.example.com -d -it myimages/image1 /bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0; bash"
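With the Docker socket mounted as above, the conductor can drive its siblings from inside with ordinary docker commands; a sketch, reusing the names and addresses from above:
# run inside conductor1: start a second worker via the host's Docker daemon
docker run --cap-add=NET_ADMIN --name=worker2 -h worker2.example.com -d -it myimages/image1 /bin/sh -c "/sbin/ip addr add 172.17.0.9 dev eth0; bash"
# and stop it again when the dist-app no longer needs it
docker stop worker2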
Docker containers by default do not have sufficient privileges to manipulate the network stack. You can try adding --cap-add=NET_ADMIN to the run command to allow this specific capability. Or you can try --privileged=true (grants all rights) for testing.
Another option is to use pipework from the host.
Set up your own bridge (e.g. br0)
Start docker with: -b=br0
and with pipework (192.168.1.1 below being the default gateway IP address):
pipework br0 container-name 192.168.1.10/24#192.168.1.1
Edit: do not start with --net=none: this closes the container's ports.
See further notes
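A minimal sketch of the first step, assuming iproute2 on the host (the interface name and the host address are illustrative):
# create an empty bridge named br0 and bring it up
ip link add name br0 type bridge
ip link set br0 up
# optionally give the host its own address on that bridge
ip addr add 192.168.1.254/24 dev br0
# then restart the Docker daemon with -b=br0 and attach containers with pipework as above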
I understood that you are not looking at multi-host networking of containers at this stage, but I believe you are likely to need it soon. Weave would allow you to first define multiple container networks on one host, and then potentially move some containers to another host without losing the static IP you have assigned to them.
pipework is also great, but if you can use hostnames instead of IPs, you can try this script:
#!/bin/bash
# This function will list all ip of running containers
function listip {
for vm in `docker ps|tail -n +2|awk '{print $NF}'`;
do
ip=`docker inspect --format '{{ .NetworkSettings.IPAddress }}' $vm`;
echo "$ip $vm";
done
}
# This function will copy hosts file to all running container /etc/hosts
function updateip {
for vm in `docker ps|tail -n +2|awk '{print $NF}'`;
do
echo "copy hosts file to $vm";
docker exec -i $vm sh -c 'cat > /etc/hosts' < /tmp/hosts
done
}
listip > /tmp/hosts
updateip
You just need to run this script every time you boot up your Docker labs.
You can find my scripts with additional functions here: dockerip
For completeness: there's another method suggested on the Docker forums. (Edit: and mentioned in passing by the answer from Андрей Сердюк).
Add the static IP address on the host system, then publish ports to that ip, e.g. docker run -p 192.0.2.1:80:80 -d mywebserver.
Of course that syntax won't work for IPv6 and the documentation doesn't mention that...
It sounds wrong to me: the usual wildcard binds (*:80) on the host theoretically conflict with the container. In practice the Docker port takes precedence and doesn't conflict, because of how it's implemented using iptables. But your public container IP will still respond on all the non-conflicting ports, e.g. ssh.
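For the first step, adding the extra address on the host before publishing to it, something along these lines is usually enough (eth0 is just an assumed interface name; 192.0.2.1 is the example address from above, and the prefix length should match your subnet):
# give the host the extra address, then bind the published port to it
ip addr add 192.0.2.1/24 dev eth0
docker run -p 192.0.2.1:80:80 -d mywebserver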
I discovered that --net=host might not always be the best option, as it might allow users to shut down the host from the container! In any case, it turns out that the reason I couldn't properly do it from inside was that network configuration was designed to be restricted to sessions begun with the --privileged=true argument.
You can set up SkyDNS with a service discovery tool - https://github.com/crosbymichael/skydock
Or: simply create a network interface and publish Docker container ports on it, as described here: https://gist.github.com/andreyserdjuk/bd92b5beba2719054dfe