Docker host config has wrong IP address - linux

I'm trying to install Docker on Raspbian but it seems to have picked up an old config from somewhere. No idea where from, as I can't find any references anywhere.
I have installed Docker on Raspbian using sudo apt-get install docker-ce.
When I try to connect to Docker, it tries to connect to the wrong IP address (192.168.1.75 when it should be 192.168.1.227).
$ docker ps
error during connect: Get http://192.168.1.75:2376/v1.38/containers/json: dial tcp 192.168.1.75:2376: connect: no route to host
The server used to be on 192.168.1.75 but is now on 192.168.1.227.
$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr b8:27:eb:50:b4:16
inet addr:192.168.1.227 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:42704 errors:0 dropped:0 overruns:0 frame:0
TX packets:61093 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6278037 (5.9 MiB) TX bytes:80578119 (76.8 MiB)
I've tried rebooting the server, deleting the contents of the /var/run/docker folder, and even reinstalling Docker. It's still determined that the IP address is 192.168.1.75.

You can set the host that the docker command connects to with the DOCKER_HOST environment variable:
export DOCKER_HOST="tcp://192.168.1.227:2376"
But it's strange that you have to do this on a default installation; perhaps there is a DOCKER_HOST variable in your bash/zsh profile that causes this problem?
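To track down where the stale value comes from, it's worth grepping the usual shell startup files and clearing the variable for the current session. A minimal sketch (adjust the file list to your shell):
# look for a stale DOCKER_HOST definition
grep -n DOCKER_HOST ~/.bashrc ~/.profile ~/.zshrc /etc/environment 2>/dev/null
# clear it for the current session and retry
unset DOCKER_HOST
docker ps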

Related

Raspberry Pi configured for static IP also gets a DHCP IP

I've configured my Raspberry Pi for static IP. My /etc/network/interfaces looks like this:
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
Yet for some strange reason, every time I reboot my Pi or my router, my Pi gets the requested IP (192.168.1.2) but ALSO a DHCP address (192.168.1.18). So my Pi has two addresses.
Of course, this isn't necessarily a problem; I just think it's strange. Am I doing something wrong? Or not doing enough? My router is almost completely locked down for management, but I can enter static IPs for devices - is this necessary if I configure the Pi to do it?
The dynamic address isn't apparent in ifconfig:
eth0 Link encap:Ethernet HWaddr b8:27:eb:5d:87:71
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:236957 errors:0 dropped:34 overruns:0 frame:0
TX packets:260738 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:35215632 (33.5 MiB) TX bytes:70023369 (66.7 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:27258 errors:0 dropped:0 overruns:0 frame:0
TX packets:27258 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3397312 (3.2 MiB) TX bytes:3397312 (3.2 MiB)
Yet I can ping, ssh and do everything on .18 as well.
Since you can add multiple IP addresses to interface eth0, as explained below, I believe the solution to your problem is to remove the auto eth0 line from your /etc/network/interfaces file.
The IP addresses attached to interface eth0 can be viewed with ip addr. Maybe eth0 has two IP addresses configured, 192.168.1.2 and 192.168.1.18.
You can also add multiple IP addresses to interface eth0 with
sudo ip addr add <IP address> dev eth0
If you don't want the IP address 192.168.1.18, you can remove it with
sudo ip addr del 192.168.1.18 dev eth0
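As a quick sanity check afterwards, you can list what is currently bound to eth0:
ip addr show dev eth0
# after removing the extra address, this should only list 192.168.1.2/24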

How to configure the Docker default bridge to work with an unusual network configuration?

Basically, at work I have a DHCP address assigned as:
eth0 Link encap:Ethernet HWaddr 5c:26:0a:5a:b8:48
inet addr:10.10.10.193 Bcast:10.10.10.255 Mask:255.255.255.0
inet6 addr: <addr here>/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3591236 errors:0 dropped:0 overruns:0 frame:0
TX packets:2057576 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3449424352 (3.4 GB) TX bytes:384131635 (384.1 MB)
Interrupt:20 Memory:e2e00000-e2e20000
and with this, my host can connect to the internet just fine. But none of my Docker containers can connect to the internet at work. Their configuration looks like this:
docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:117799 errors:0 dropped:0 overruns:0 frame:0
TX packets:170586 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4858816 (4.8 MB) TX bytes:122237788 (122.2 MB)
Everything works when I'm at home, sitting behind a traditional 192.168.x.x home router.
So I'm thinking: if I could somehow get the docker0 interface to sit NATed behind eth0, everything would work both at home and at work. But I'm not familiar with configuring Linux interfaces. I found an article that talked about almost exactly the same problem, but following its commands to add an interface br0 at 10.10.10.200/24 produced the following symptoms:
My host could no longer resolve domain names; removing the br0 interface immediately fixed this.
The dockerized apps could now ping 4.2.2.1, but not 8.8.8.8 or 8.8.4.4, and could not resolve domain names. Adding --dns 4.2.2.1 to DOCKER_OPTS in /etc/default/docker.io did not solve the problem.
After the br0 interface was removed, the dockerized apps could no longer ping 4.2.2.1, 8.8.8.8 or 8.8.4.4.
I haven't changed iptables; it's using the default Docker configuration on a basic Ubuntu 14.04 host.
How do I best configure the interfaces so that the dockerized applications can connect to the internet both at home and at work?
A lot of details about networking and bridge can be found at:
Ubuntu: https://help.ubuntu.com/community/NetworkConnectionBridge
Docker: https://docs.docker.com/articles/networking/#building-your-own-bridge.
I have an Ubuntu 14.04 VM, but I'm not sure whether it would reproduce this solution exactly. However, I have the exact same situation in my office, where some VPN servers hand out the same default network range that Docker uses on the docker0 bridge. Being able to use Docker from the office but not while VPN'ed was really frustrating.
So I used the same strategy described at the link you used, http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/, but on RHEL 6.5 servers. I did try many different options to get it working:
Used a different IP range
Used a different mask
Tried the manual setup first, then automated the permanent solution
I have the solution on RHEL 6.5 as follows:
[root@pppdc9prd6dq newww]# cat /etc/sysconfig/network-scripts/ifcfg-bridge0
TYPE=Bridge
DEVICE=bridge0
NETMASK=255.255.252.0
IPADDR=192.168.5.1
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
DELAY=0
Manually add bridge
Here are the steps for you to create a bridge manually:
1. Stop Docker
$ sudo service docker stop
2. Create the bridge
$ ip link add bridge0 type bridge
$ ip addr add 192.168.5.1/22 dev bridge0
$ ip link set bridge0 up
3. Update the Docker daemon to use the bridge
$ vim /etc/docker/daemon.json
{ "bridge": "bridge0" }
4. Restart Docker
$ sudo service docker start
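As a quick sanity check (assuming the busybox image is available), a fresh container should now get an address from bridge0's subnet instead of the default 172.17.0.0/16:
$ docker run --rm busybox ip addr show eth0
# the inet line should fall inside 192.168.4.0/22, the bridge0 subnet configured above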
If everything works fine, make the fix permanent.
Persistent
1. Same as manual
2. Update the following file
$ vim /etc/network/interfaces
auto bridge0
iface bridge0 inet static
address 192.168.5.1
netmask 255.255.252.0
bridge_ports dummy0
bridge_stp off
bridge_fd 0
3. Same as manual
4. Same as manual
And make sure that bridge-utils is installed on the server, otherwise the bridge interface won't come up.
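For example (the package name is bridge-utils on both of the distribution families mentioned above):
# Debian/Ubuntu
sudo apt-get install bridge-utils
# RHEL/CentOS
sudo yum install bridge-utils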
Maybe that will work? Anyway, try the steps here and we can discuss and adjust this solution. I'm sure more people will run into this once they start using Docker internally behind a VPN.

File System with Buildroot - Network & Keyboard issues

I'm using Buildroot to create a file system to run on an ARM target.
After a few attempts, I managed to make it work, but I noticed a few problems.
There wasn't any package manager.
It's impossible to install new utilities. I've found this question about opkg, and I'll try to include it before building the Buildroot image.
The keyboard has been set to the us_US layout.
Is it possible to set the default keyboard layout to it_IT from Buildroot, instead of loading a configuration file with loadkmap in /etc/rcS?
The ping and the wget commands show
ping www.google.com
ping: bad address google.com
Is there any specific configuration to do in Buildroot in order to solve this problem?
The network issue is the most important one.
Here is the output of the ifconfig command:
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:25702 errors:0 dropped:0 overruns:0 frame:0
TX packets:25702 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 MB) TX bytes:0 (0.0 MB)
Regarding your question about the keyboard layout: there is no way to set the keyboard layout from the Buildroot configuration.
You need to configure it from an init script under /etc/init.d (not in /etc/init.d/rcS; that script is solely used for running the scripts under /etc/init.d/* and should normally not be modified).
See e.g. http://git.buildroot.org/buildroot/tree/system/skeleton/etc/init.d/S40network for a simple template to base your init script on.
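As a rough sketch, an init script along these lines would load the layout at boot; S41keymap and /etc/it.kmap are just example names, and you need to provide a binary keymap in the format loadkmap expects (e.g. one produced by busybox dumpkmap) yourself:
#!/bin/sh
# /etc/init.d/S41keymap - example: load an Italian keymap at boot
# Assumes a binary keymap exists at /etc/it.kmap (placeholder path)
case "$1" in
  start)
    printf "Loading keymap: "
    if loadkmap < /etc/it.kmap; then echo "OK"; else echo "FAIL"; fi
    ;;
  stop|restart|reload)
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|reload}"
    exit 1
    ;;
esac
Remember to make the script executable, otherwise rcS will skip it.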

difference between eth0 and venet0 configuration on CentOS 6.3

I followed this tutorial to set up a nameserver using BIND on my VPS running CentOS 6.3 64-bit. I have two VPS servers: one is virtualized with Xen, the other with OpenVZ. I noticed that the two servers differ in their network interface: the Xen server has eth0 (configured in /etc/sysconfig/network-scripts/ifcfg-eth0), while the OpenVZ one has venet0 (/etc/sysconfig/network-scripts/ifcfg-venet0).
When I follow that tutorial, the nameserver on my Xen server works well, but the nameserver on the OpenVZ server does not work at all. This made me wonder what difference makes them behave differently for the nameserver.
The following is the result I got from "ifconfig" command:
[root@server1 data]# ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:112 errors:0 dropped:0 overruns:0 frame:0
TX packets:112 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:10819 (10.5 KiB) TX bytes:10819 (10.5 KiB)
venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:50.31.115.236 P-t-P:50.31.115.236 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:539325 errors:0 dropped:0 overruns:0 frame:0
TX packets:368277 errors:0 dropped:80 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:41142712 (39.2 MiB) TX bytes:37293025 (35.5 MiB)
As you can see, venet0 has its inet addr listed as 127.0.0.1. Could someone help me understand the differences? Thanks.
According to the information you included in your question, venet0 has the IP 50.31.115.236. The 127.0.0.1 you see belongs to the special loopback interface lo.
Usually the first network interface is named eth0. Virtualizing with Xen doesn't change that, as it pretends to be normal hardware. OpenVZ works a bit differently, and as I understand it the name of the Ethernet device, venet0, is set by the system administrator of the physical machine.
I can't look at the linked tutorial as I only get a blank page, so I can only give general advice: wherever it shows eth0, use venet0 instead on the second system.
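For instance, you can confirm the address bound to venet0 and then use it wherever the tutorial refers to eth0 or its address (the listen-on line below is only a hypothetical illustration of where this matters in BIND):
ip addr show venet0
# then use that address in place of eth0's wherever the tutorial needs it,
# e.g. in /etc/named.conf:
#   listen-on port 53 { 127.0.0.1; 50.31.115.236; };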

Wireshark doesn't display ICMP traffic between two logical interfaces

I added two logical interfaces for testing with the following commands:
# set link on physical Device Up
sudo ip link set up dev eth0
# create logical Interfaces
sudo ip link add link eth0 dev meth0 address 00:00:8F:00:00:02 type macvlan
sudo ip link add link meth0 dev meth1 address 00:00:8F:00:00:03 type macvlan
# assign IP addresses and bring the links up
sudo ip addr add 192.168.56.5/26 dev meth0
sudo ip addr add 192.168.56.6/26 dev meth1
sudo ip link set up dev meth0
sudo ip link set up dev meth1
ifconfig
meth0 Link encap:Ethernet HWaddr 00:00:8f:00:00:02
inet addr:192.168.56.5 Bcast:0.0.0.0 Mask:255.255.255.192
inet6 addr: fe80::200:8fff:fe00:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:35749 errors:0 dropped:47 overruns:0 frame:0
TX packets:131 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3830628 (3.8 MB) TX bytes:15278 (15.2 KB)
meth1 Link encap:Ethernet HWaddr 00:00:8f:00:00:03
inet addr:192.168.56.6 Bcast:0.0.0.0 Mask:255.255.255.192
inet6 addr: fe80::200:8fff:fe00:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:35749 errors:0 dropped:47 overruns:0 frame:0
TX packets:115 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3830628 (3.8 MB) TX bytes:14942 (14.9 KB)
I run "wireshark" to test traffic between meth0 and meth1 ,
so I execute ping 192.168.56.6 to generate icmp traffic but this traffic doesn't appear in wireshark .
there is a a problem in wireshark with logical interface ?
Is there a problem in Wireshark with logical interfaces?
Probably not. You'll probably see the same problem with tcpdump, netsniff-ng, or anything else that uses PF_PACKET sockets for sniffing on Linux (Linux in general, not just Ubuntu in particular, or even Ubuntu, Debian, and other Debian-derived distributions).
Given that those are two logical interfaces on the same machine, traffic between them will not go onto the Ethernet - no Ethernet adapters I know of will receive packets that they transmit, so if the packet were sent on the Ethernet the host wouldn't see it, and there wouldn't be any point in wasting network bandwidth by putting that traffic on the network even if the Ethernet adapter would see its own traffic.
So if you're capturing on eth0, you might not see the traffic. Try capturing on lo instead.
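For example, with the ping running in one terminal, a capture on lo should show the ICMP echo request/reply pairs (tcpdump shown here; selecting the lo interface in Wireshark behaves the same way):
sudo tcpdump -n -i lo icmp
# in another terminal:
ping -c 3 192.168.56.6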
