I log into a machine using ssh with IP1. When logged in, the output of ifconfig/'ip addr' shows IP2. Why are IP1 and IP2 different? - linux

I type:
ssh root@10.2.4.xx
So, IP1 is 10.2.4.xx.
When logged into the machine, the output of
ifconfig
is:
eth0 Link encap:Ethernet HWaddr fa:xx:xx:xx:xx:xx
inet addr:172.17.xx.xx Bcast:172.17.xx.xxx Mask:255.255.255.0
inet6 addr: fe80::xxxx:xxxx:xxxx:xxxx/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
.
.
.
So, IP2 is 172.17.xx.xx.
Shouldn't IP1 and IP2 be the same? Why are they different?

The most likely reason is that the machine you are logging into has two network interface cards: the eth0 shown above is configured with IP2, and another interface (probably eth1?) is configured with IP1, the address you ssh to. The full output of ifconfig should show both.
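If that is the case, listing all interfaces should show both addresses; for example (the -a flag makes ifconfig include interfaces that are down, and ip addr is the iproute2 equivalent):
ifconfig -a
ip addr show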
Other, less likely reasons (a quick check is sketched after this list):
your ssh config file ~/.ssh/config has an entry which reads "Host 10.2.4.xx" followed by "HostName 172.17.xx.xx"
your /etc/hosts has a line 10.2.4.xx 172.17.xx.xx
the .bashrc of root on IP1 contains ssh -t 172.17.xx.xx
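A quick way to check which of these applies (a suggestion, not part of the original answer) is to run ssh in verbose mode and look at the address it actually connects to once ~/.ssh/config has been applied:
ssh -v root@10.2.4.xx 2>&1 | grep -i connecting
The debug output contains a line of the form "debug1: Connecting to <host> [<address>] port 22."; the exact wording varies between OpenSSH versions.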

Related

Pass IP-address to cloud-init metadata

I am searching for a way to pass an IP address to cloud-init metadata, so that when my qcow image boots it does not have to wait 120-180 seconds before it is up.
Currently I've created a workaround by adding the IP address information to the user-data part of cloud-init. The downside is that it still takes some time, because cloud-init user-data is only executed after the VM has booted.
echo -e "
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
\taddress $address
\tnetmask $netmask
\tgateway $gateway
\tdns-nameservers $dnsnameservers
" > /etc/network/interfaces
ifdown eth0; ifup eth0
To set the hostname through cloud-init metadata, I already have this part:
cat > $config_dir/meta-data <<-EOF
instance-id: $uuid
hostname: $hostname
local-hostname: $hostname
EOF
But I need something more solid and concrete, because the cloud-init documentation is very vague.
EDIT: I've seen that it is possible, because OpenStack can do it.
It is possible by using the following format:
network-interfaces: |
  auto lo
  iface lo inet loopback
  iface eth0 inet static
  address xxx.xxx.xxx.xxx (static ip)
  netmask xxx.xxx.xxx.xxx
  gateway xxx.xxx.xxx.xxx (gateway ip, usually ip address of router)
You basically write out the configuration you want your interfaces to have, the way an interfaces file would look, as long as it sits under 'network-interfaces: |'.
Hope that answers your question!
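Putting that together with the meta-data snippet from the question, a complete NoCloud-style meta-data file could look roughly like this (a sketch only: $config_dir, $uuid and $hostname are the placeholders used above, the addresses are made up, and which keys are honoured depends on the cloud-init version and datasource):
cat > $config_dir/meta-data <<-EOF
instance-id: $uuid
hostname: $hostname
local-hostname: $hostname
network-interfaces: |
  auto lo
  iface lo inet loopback
  auto eth0
  iface eth0 inet static
  address 192.0.2.10
  netmask 255.255.255.0
  gateway 192.0.2.1
EOF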

Why can't Docker containers communicate with each other?

I have created a small project to test Docker clustering. Basically, the cluster.sh script launches three identical containers and uses pipework to configure a bridge (bridge1) on the host and add a second NIC (eth1) to each container.
If I log into one of the containers, I can arping other containers:
# 172.17.99.1
root@d01eb56fce52:/# arping 172.17.99.2
ARPING 172.17.99.2
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=0 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=1 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=2 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=3 time=1.001 sec
^C
--- 172.17.99.2 statistics ---
5 packets transmitted, 4 packets received, 20% unanswered (0 extra)
So it seems packets can go through bridge1.
But the problem is that I can't ping the other containers, nor can I send any IP packets through with tools like telnet or netcat.
In contrast, the bridge docker0 and NIC eth0 work correctly in all containers.
Here's my route table
# 172.17.99.1
root@d01eb56fce52:/# ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.17
172.17.99.0/24 dev eth1 proto kernel scope link src 172.17.99.1
and bridge config
# host
$ brctl show
bridge name   bridge id           STP enabled   interfaces
bridge1       8000.8a6b21e27ae6   no            veth1pl25432
                                                veth1pl25587
                                                veth1pl25753
docker0       8000.56847afe9799   no            veth7c87801
                                                veth953a086
                                                vethe575fe2
# host
$ brctl showmacs bridge1
port no   mac addr            is local?   ageing timer
  1       8a:6b:21:e2:7a:e6   yes         0.00
  2       8a:a3:b8:90:f3:52   yes         0.00
  3       f6:0c:c4:3d:f5:b2   yes         0.00
# host
$ ifconfig
bridge1 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::48e9:e3ff:fedb:a1b6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:163 errors:0 dropped:0 overruns:0 frame:0
TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8844 (8.8 KB) TX bytes:12833 (12.8 KB)
# I'm showing only one veth here for simplicity
veth1pl25432 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::886b:21ff:fee2:7ae6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:155 errors:0 dropped:0 overruns:0 frame:0
TX packets:162 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12366 (12.3 KB) TX bytes:23180 (23.1 KB)
...
and IP FORWARD chain
# host
$ sudo iptables -x -v --line-numbers -L FORWARD
Chain FORWARD (policy ACCEPT 10675 packets, 640500 bytes)
num   pkts    bytes      target   prot  opt  in       out       source     destination
1     15018   22400195   DOCKER   all   --   any      docker0   anywhere   anywhere
2     15007   22399271   ACCEPT   all   --   any      docker0   anywhere   anywhere   ctstate RELATED,ESTABLISHED
3     8160    445331     ACCEPT   all   --   docker0  !docker0  anywhere   anywhere
4     11      924        ACCEPT   all   --   docker0  docker0   anywhere   anywhere
5     56      4704       ACCEPT   all   --   bridge1  bridge1   anywhere   anywhere
Note that the pkts count for rule 5 isn't 0, which means the pings have been routed correctly (the FORWARD chain is traversed after routing, right?), but somehow they didn't reach the destination.
I'm out of ideas why docker0 and bridge1 behave differently. Any suggestions?
Update 1
Here's the tcpdump output on the target container when pinged from another.
$ tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
22:11:17.754261 IP 192.168.1.65 > 172.17.99.1: ICMP echo request, id 26443, seq 1, length 6
Note the source IP is 192.168.1.65, which is the host's eth0 address, so there seems to be some SNAT going on at the bridge.
Finally, printing out the nat IP table revealed the cause of the problem:
$ sudo iptables -L -t nat
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
...
Because the eth1 addresses (172.17.99.0/24) fall inside 172.17.0.0/16, the network of the default docker0 bridge, packets sent over eth1 match this rule and have their source IP rewritten. This is why the replies to ping can't get back to the source.
Conclusion
The solution is to make sure the eth1 network does not fall inside the network of the default docker0, i.e. move the containers' eth1 addresses (or docker0 itself) onto a different subnet.
Copied from Update 1 in the question: tcpdump on the target container shows the pings arriving with the host's eth0 address (192.168.1.65) as the source, because the MASQUERADE rule for 172.17.0.0/16 in the nat POSTROUTING chain rewrites the source of packets leaving eth1. Since the replies can never reach the original sender, the fix is to keep the eth1 network from overlapping with the default docker0 network (172.17.0.0/16).
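Not what the author did, but a related workaround that is sometimes used: exempt container-to-container traffic on bridge1 from Docker's masquerading by inserting an ACCEPT rule ahead of the MASQUERADE rule in the nat table (a sketch, assuming the 172.17.99.0/24 addresses from the question):
sudo iptables -t nat -I POSTROUTING -s 172.17.99.0/24 -d 172.17.99.0/24 -j ACCEPT
An ACCEPT target in the nat table means "do not NAT this packet", so traffic between the containers keeps its original source address.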

ip route add by specifying source address in the same network

I have 4 PCs and another PC, call it proxy, all in the same network: 172.16.96.0/20. They can all ping each other.
But I want to separate them into two groups. That is:
pc1 is directly connected to pc2
pc3 is directly connected to pc4
But,
all traffic from pc1 or pc2 to pc3 or pc4 has to go through proxy and
all traffic from pc3 or pc4 to pc1 or pc2 has to go through proxy
pc1             pc3
 |    -proxy-    |
pc2             pc4
pc1 IP: 172.16.97.24
pc3 IP: 172.16.97.27
proxy IP: 172.16.97.2
To do that on pc1 I added:
ip route add 172.16.97.27 via 172.16.97.2
But when I do traceroute 172.16.97.27, 172.16.97.2 does not appear as a hop. I am not sure if it should.
On proxy the routing table looks like:
default via 172.16.111.254 dev eth0
172.16.96.0/20 dev eth0 proto kernel scope link src 172.16.97.2
I think I should add another route with pc1's address, 172.16.97.24, as the source.
And to be able to forward the traffic received from pc1 (172.16.97.24) to its destination (either pc3 or pc4), I tried this:
ip route add 172.16.96.0/20 via 0.0.0.0 src 172.16.97.24
Error: RTNETLINK answers: No such device
ip route add 172.16.96.0/20 dev eth0:0 via 0.0.0.0 src 172.16.97.24
Error: RTNETLINK answers: Invalid argument
and:
ip route add 172.16.96.0/20 src 172.16.97.24
Error: RTNETLINK answers: No such device
I am not sure if I am going on the right path to do this configuration. Please tell me if not. Thank you!
I managed to solve the problem by adding on the proxy the following:
# sysctl net.ipv4.ip_forward=1 (or add net.ipv4.ip_forward=1 to /etc/sysctl.conf to make it persist across reboots)
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-A POSTROUTING Append a rule to the POSTROUTING chain
-o eth0 this rule is valid for packets that leave on the eth0 network interface (-o stands for "output")
-j MASQUERADE the action that should take place is to 'masquerade' packets, i.e. replacing the sender's address by the router's address.
And I added on pc1,pc2,pc3,pc4:
ip route add pcDestIP via proxy
where pcDestIP is the address of pc3 or pc4 when I am writing the rule on pc1 or pc2, and vice versa.
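Spelled out with the addresses from the question (a sketch; eth0 is taken from the proxy's routing table, and the rules on the other PCs are symmetric):
# on proxy
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# on pc1, send traffic for pc3 through the proxy
ip route add 172.16.97.27 via 172.16.97.2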
More info : http://www.karlrupp.net/en/computer/nat_tutorial
and here: https://serverfault.com/questions/306024/how-to-route-network-traffic-of-a-host-via-another-host

nc -u 192.168.1.255 9999 fails

I am trying to broadcast to 192.168.1.255 which is my broadcast address. ifconfig says
eth0 Link encap:Ethernet HWaddr 50:e5:49:51:0b:cb
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::52e5:49ff:fe51:bcb/64 Scope:Link
but if I do nc -uv 192.168.1.255 9999 it reports
nc: connect to 192.168.1.255 port 9999 (udp) failed: Permission denied
but nc -uv 192.168.0.255 9999 works fine
Try using socat instead, since many nc builds don't support UDP broadcasting (they never set the SO_BROADCAST socket option, which is why sending to a broadcast address fails with Permission denied):
echo "HELLO" | socat - UDP4-DATAGRAM:192.168.1.255:9999,broadcast

linux device driver for pure ipv6 device

I am currently designing a Linux device driver for an IPv6-only device. Is there any way to make the kernel module support only IPv6, so that the device can only be assigned an IPv6 address? And what are the commands in Linux to set the address?
Thanks
Adding IP:
Using the ip command:
$ sudo /sbin/ip -6 addr add 2001:0db8:0:f101::1/64 dev eth0
Using the ifconfig command:
$ sudo /sbin/ifconfig eth0 inet6 add 2001:0db8:0:f101::1/64
Deleting IP:
Using ip:
$ sudo /sbin/ip -6 addr del 2001:0db8:0:f101::1/64 dev eth0
Using ifconfig:
$ sudo /sbin/ifconfig eth0 inet6 del 2001:0db8:0:f101::1/64
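To double-check that only IPv6 addresses ended up on the interface, the listing can be restricted to IPv6 (just a verification step, not part of the original answer):
$ /sbin/ip -6 addr show dev eth0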
