Ping and ssh using three-octet IPv4 addresses. Why does it work? - linux

By accident I had a typo and discovered that I can ping and ssh to the IP address 10.8.290 ... right, one octet is missing. Can someone explain it to me? Is this part of the protocol or some Linux black magic (I am using Debian)?
user@ws:~$ ping -c3 10.8.290
PING 10.8.290 (10.8.1.34) 56(84) bytes of data.
64 bytes from 10.8.1.34: icmp_req=1 ttl=62 time=0.910 ms
64 bytes from 10.8.1.34: icmp_req=2 ttl=62 time=0.686 ms
64 bytes from 10.8.1.34: icmp_req=3 ttl=62 time=0.708 ms
--- 10.8.290 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.686/0.768/0.910/0.100 ms
user@ws:~$ ssh root@10.8.290
The authenticity of host '10.8.290 (10.8.1.34)' can't be established.
ECDSA key fingerprint is 21:bd:7e:fb:1e:6d:1e:c1:e9:11:c0:a9:73:a8:cf:85.
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.

It works because you passed 290 as the third part of the address. This is not a protocol feature: ping does not validate the individual parts of its target argument, it simply hands the string to the C library function inet_aton() ("ASCII to network"). inet_aton() accepts the classic "numbers-and-dots" notation with one to four parts; in the three-part form a.b.c, the last part is interpreted as a 16-bit value that fills the rightmost two bytes of the address (see inet(3)). Since 290 = 1 * 256 + 34, 10.8.290 resolves to 10.8.1.34, and that is the host you actually pinged and sshed to.
I'm not sure, but I'd expect other versions of ping (on Windows, the BSDs) to behave the same way, since they use the same BSD-derived address parser.
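A minimal sketch of that parsing behaviour, assuming a Linux/glibc system:

/* inet_aton() accepts one to four "numbers-and-dots" parts; in the
 * three-part form a.b.c the last part fills the rightmost 16 bits. */
#include <stdio.h>
#include <arpa/inet.h>

int main(void) {
    struct in_addr addr;

    if (inet_aton("10.8.290", &addr))           /* 290 = 1*256 + 34 */
        printf("10.8.290  -> %s\n", inet_ntoa(addr));  /* 10.8.1.34 */

    if (inet_aton("10.524578", &addr))          /* two-part form: last 24 bits */
        printf("10.524578 -> %s\n", inet_ntoa(addr));  /* also 10.8.1.34 */

    return 0;
}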

Related

In the traceroute source code, why is the buffer size hard-coded to 512: u_char packet[512];

While reviewing the traceroute source code, I saw that the inbound ICMP packet buffer size is hard-coded to 512 bytes.
I don't know why the size is limited to 512 bytes. What happens if an inbound ICMP packet is larger than 512 bytes?
In general, there are three ways to implement traceroute (I am not familiar with the implementation using the GRE protocol): sending ICMP Echo Requests, UDP packets, or TCP SYN packets with a gradually increasing TTL value, starting at 1. In all three cases the expected replies are small, as the breakdown below and the sketch after this list show.
If it sends ICMP Echo Requests, it expects an ICMP Time Exceeded message (20-byte outer IP header + 8-byte ICMP header + 20-byte inner IP header + first 8 bytes of the original datagram = 56 bytes), or, once the destination is reached, an ICMP Echo Reply of 20 + 8 bytes plus the echoed payload. RFC 792 allows an arbitrary data length for Echo Request/Reply, but traceroute doesn't need that.
If it sends UDP packets, it expects an ICMP Time Exceeded message or, at the destination, a Port Unreachable message (again 20 + 8 + 20 + 8 = 56 bytes). Some implementations may quote some extra data, but not much.
If it sends TCP SYN packets, the inbound packets should be an ICMP Time Exceeded message, a TCP SYN+ACK, or a TCP RST, all of which are far smaller than 512 bytes.
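A compilable sketch of the receive path under those assumptions (this is not traceroute's actual code; the function and its error handling are illustrative):

/* Receive one ICMP reply into a fixed 512-byte buffer, as the
 * hard-coded packet[512] does. recvfrom() on a raw socket returns one
 * datagram per call; anything beyond the buffer size is silently
 * discarded, which is acceptable here because the replies listed
 * above are all well under 512 bytes. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int receive_reply(int raw_sock) {
    unsigned char packet[512];          /* same fixed size as in traceroute */
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);

    ssize_t n = recvfrom(raw_sock, packet, sizeof(packet), 0,
                         (struct sockaddr *)&from, &fromlen);
    if (n < 0)
        return -1;
    /* e.g. an ICMP Time Exceeded reply: 20 (outer IP) + 8 (ICMP)
     * + 20 (inner IP) + 8 (inner data) = 56 bytes */
    printf("received %zd bytes\n", n);
    return 0;
}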

What is the use of the buffer size in a socket?

I am learning about the implementation of the ping command, and I have a doubt.
I know the socket buffer determines how many bytes can be held for transfer at a time on the socket.
But how does the ping command manage the socket buffer size?
$ ping -S 300 -s 400 google.com
PING google.com (216.58.197.46) 400(428) bytes of data.
72 bytes from maa03s20-in-f14.1e100.net (216.58.197.46): icmp_req=1 ttl=56 (truncated)
72 bytes from maa03s20-in-f14.1e100.net (216.58.197.46): icmp_req=2 ttl=56 (truncated)
72 bytes from maa03s20-in-f14.1e100.net (216.58.197.46): icmp_req=3 ttl=56 (truncated)
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 40.231/41.500/42.408/0.924 ms
In the above ping command I set the socket buffer size to 300 (-S) and the packet size to 400 (-s). The socket buffer size is only 300, yet ping pushes 400 bytes at a time through that socket. How is that possible?
At the same time, the received packet size is only 72 bytes. Why only 72?
Can anyone please explain?
Two separate things are going on here. First, -S 300 asks for a 300-byte send buffer, but that does not cap the datagram size: on Linux the kernel doubles the requested SO_SNDBUF value and enforces a minimum (see socket(7)), so the effective buffer is comfortably larger than the 428-byte packet (400 data bytes + 8-byte ICMP header + 20-byte IP header) that ping builds, as the first line of output confirms.
Second, the 72 bytes describe the replies, not the requests. The remote host answered with only 64 bytes of ICMP data (64 + 8-byte header = 72) instead of echoing all 400 bytes back, and ping flags that mismatch with "(truncated)". Hosts like google.com cap the payload they echo rather than padding replies out to the requested size.
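A minimal sketch of the first point: request a 300-byte send buffer and read back what the kernel actually granted. On Linux the value is doubled and clamped to a minimum (see socket(7)), so -S 300 does not stop ping from sending a 428-byte datagram.

#include <stdio.h>
#include <sys/socket.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int requested = 300, granted = 0;
    socklen_t len = sizeof(granted);

    if (s < 0)
        return 1;
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, &granted, &len);
    printf("requested %d bytes, kernel granted %d bytes\n", requested, granted);
    return 0;
}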

Pipe in Ping command output

Ping command output:
PING google.com (216.58.197.78) 56(84) bytes of data.
......
--- google.com ping statistics ---
53 packets transmitted, 28 received, 47% packet loss, time 687ms
rtt min/avg/max/mdev = 50.360/57.587/88.878/9.458 ms, pipe 8, ipg/ewma 13.220/57.967 ms
The above output shows the statistics. Here, what does the "pipe" value refer to?
It's the maximum number of ICMP echo requests that were in flight (sent but not yet answered) at any one time.
See https://serverfault.com/questions/587730/what-does-pipe-number-mean-in-ping-output for information about the pipe value in ping.
In the link provided by RaviTezu, check the last answer. It's not the number of packets lost; rather, it's the maximum number of outstanding requests that were unanswered at some point in time but were eventually answered.
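To make the idea concrete, here is a toy sketch (not ping's actual code, and the event sequence is made up): walk the send and reply events in order and track the peak number of requests still awaiting an answer.

#include <stdio.h>

int main(void) {
    /* +1 = echo request sent, -1 = matching reply received */
    int events[] = { +1, +1, +1, -1, +1, +1, -1, +1, -1, -1, -1, -1 };
    int in_flight = 0, pipe_max = 0;

    for (unsigned i = 0; i < sizeof(events) / sizeof(events[0]); i++) {
        in_flight += events[i];
        if (in_flight > pipe_max)
            pipe_max = in_flight;
    }
    printf("pipe %d\n", pipe_max);   /* prints: pipe 4 */
    return 0;
}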

iperf UDP over IPv6

I'm doing some UDP bandwidth tests using iperf (https://iperf.fr/) over IPv6. I have very bad results when using a Linux UDP client with the following command line:
iperf -u -V -c fe80::5910:d4ff:fe31:5b10%usb0 -b 100m
Investigating the issue with Wireshark I have seen there is some fragmentation while the client is sending data. To be more precise, I see UDP client outgoing packets with size 1510 bytes and 92 bytes, alternating.
For example, the UDP packets that I see have the following pattern (in size): 1510, 92, 1510, 92, 1510, 92,...,1510, 92,...
Reading the iperf2 documentation, I found the following for the -l option:
The length of buffers to read or write. iPerf works by writing an array of len bytes a number of times. Default is 8 KB for TCP, 1470 bytes for UDP. Note for UDP, this is the datagram size and needs to be lowered when using IPv6 addressing to 1450 or less to avoid fragmentation. See also the -n and -t options.
I have tried to do the same bandwidth test by replacing the Linux iperf UDP client command line with the following:
iperf -u -V -c fe80::5910:d4ff:fe31:5b10%usb0 -b 100m -l1450
and I see good results. Looking at the Wireshark capture I see no fragmentation anymore.
Doing the same test over IPv4, I don't need to change the default UDP datagram size (I don't need to use the -l option) to get good results.
So my conclusion is that fragmentation (over IPv6) is responsible for the poor bandwidth performance.
Anyway, I'm wondering what really happens when setting UDP datagram size to 1450 over IPv6. Why do I have fragmentation over IPv6 and not over IPv4 with default value for UDP datagram size? Moreover, why do I have no fragmentation when reducing the UDP datagram size to 1450?
Thank you.
The base IPv4 header is 20 bytes, the base IPv6 header is 40 bytes and the UDP header is 8 bytes.
With IPv4 the total packet size is 1470+8+20 = 1498 bytes, which is less than the default Ethernet MTU of 1500.
With IPv6 the total packet size is 1470+8+40 = 1518 bytes, which is more than 1500, so the packet has to be fragmented.
Now let's look at your observations. You see packets of size 1510 and 92 bytes. Those include the 14-byte Ethernet header, so your IPv6 packets are 1496 and 78 bytes. The big packet contains: the IPv6 header (40 bytes), a fragmentation header (8), the UDP header (8), and 1440 bytes of data. The smaller packet contains the IPv6 header (40), a fragmentation header (8), and the remaining 30 bytes of data.
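As a quick sanity check of that arithmetic, a few throwaway lines of C (nothing iperf-specific is assumed here):

#include <stdio.h>

int main(void) {
    const int mtu = 1500, udp_hdr = 8, ipv4_hdr = 20, ipv6_hdr = 40;

    printf("IPv4 max UDP payload: %d\n", mtu - ipv4_hdr - udp_hdr); /* 1472 */
    printf("IPv6 max UDP payload: %d\n", mtu - ipv6_hdr - udp_hdr); /* 1452 */
    /* iperf's default of 1470 fits under IPv4 but not under IPv6;
     * -l 1450 stays below the 1452-byte IPv6 limit. */
    return 0;
}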
The most common MTU for Ethernet is 1500 bytes, not including the Ethernet frame header. This means you can send 1500 bytes in one packet over the wire, including IP headers. IPv6 headers are larger than IPv4 headers, mainly because IPv6 addresses are larger. So when you run with the default datagram size over IPv6, the packet exceeds the MTU and has to be split in two, a procedure known as fragmentation.

How does the ping "-n" flag work? AND does it affect response time? (Linux)

My question has two sides. First, I want to know how the "-n" flag works for the ping command in Linux: in particular, what step does ping skip when I pass "-n", and how does that optimize the query (or speed up the response, if it does)? Second, what is a symbolic name for a host? I need to know that to understand the -n flag.
My research:
The ping manual from the Linux terminal (man ping) says: "-n: Numeric output only. No attempt will be made to lookup symbolic names for host addresses".
From that quote I already have a doubt: I don't know what a symbolic name for a host is, and googling didn't help...
Then I tested the ping command with the -n flag and without it; the hostname used was www.gmail.com, and the results only differ in the "from" part of the output:
Without -n I got: 64 bytes from mia07s27-in-f165.1e100.net (216.58.219.165)
Using -n I got: 64 bytes from 216.58.219.165
So I am guessing that the symbolic name of this particular host is "mia07s27-in-f165.1e100.net", but what does that mean? And how is this symbolic name obtained?
My thoughts:
I have a theory about how this might work:
By default, when I don't include the "-n" flag, the ping program sends the host Echo Request packets which have a bit indicating that I didn't use the "-n" flag, so the host responds with Echo Reply packets that include information about its symbolic name. When I do use the "-n" flag, the bit flips to 1 (or 0, who knows...), which tells the host it doesn't have to include the extra information about its symbolic name, which in turn makes the ping response time lower, though the difference is insignificant and won't be noticeable.
Can you guys validate this theory I made up? Is it correct?
I've never used the -n flag before, but it's rather trivial. A symbolic name is the normal name you would use for a site, say "google.com". However, the network itself has no notion of "google.com"; there is only an address that corresponds to it (like 192.168.10.1, for instance). Although you might know this already, the service that performs this mapping is DNS, the Domain Name System, which maps symbolic names to numeric addresses.
Thus, as the man page says, the -n flag saves you a DNS query. And to correct your theory: nothing in the Echo Request changes, and ICMP echo has no bit for names. Both the forward lookup (name to address, done with or without -n) and the reverse lookup (address to name, skipped with -n) are ordinary DNS queries made locally by ping, not information carried in the reply.
Just now on my Linux distro I ping'd google with ping google.com. This was the result:
PING google.com (173.194.123.6) 56(84) bytes of data.
64 bytes from lga15s46-in-f6.1e100.net (173.194.123.6): icmp_seq=1 ttl=55 time=13.4 ms
64 bytes from lga15s46-in-f6.1e100.net (173.194.123.6): icmp_seq=2 ttl=55 time=13.5 ms
64 bytes from lga15s46-in-f6.1e100.net (173.194.123.6): icmp_seq=3 ttl=55 time=13.5 ms
As you can see, when you ping without the -n option, ping performs a reverse DNS lookup to find the symbolic name behind the source address of each incoming Echo Reply.
Then, if you ping google.com with ping -n google.com you will see something like this:
PING google.com (173.194.123.101) 56(84) bytes of data.
64 bytes from 173.194.123.101: icmp_seq=1 ttl=56 time=4.02 ms
64 bytes from 173.194.123.101: icmp_seq=2 ttl=56 time=4.07 ms
64 bytes from 173.194.123.101: icmp_seq=3 ttl=56 time=4.01 ms
Here you can see that with -n we never queried DNS for the symbolic name behind 173.194.123.101. Don't read too much into the lower times, though: the RTT that ping reports is measured from sending the request to receiving the reply and does not include the reverse lookup, so the difference here most likely comes from reaching a different Google frontend. What -n actually speeds up is ping's own output, since it never blocks waiting on DNS.
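A minimal sketch of what -n toggles. The symbolic name can be obtained with a reverse (PTR) DNS lookup of the responding address, e.g. via getnameinfo(); the NI_NUMERICHOST flag skips the DNS query entirely, which is what -n amounts to. (ping's real code may differ in detail.)

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(void) {
    struct sockaddr_in sa;
    char host[NI_MAXHOST];

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    inet_pton(AF_INET, "216.58.219.165", &sa.sin_addr); /* address from above */

    /* default ping: reverse lookup, one DNS round trip */
    if (getnameinfo((struct sockaddr *)&sa, sizeof(sa),
                    host, sizeof(host), NULL, 0, 0) == 0)
        printf("with lookup:  %s\n", host);

    /* ping -n: numeric output only, no DNS traffic */
    if (getnameinfo((struct sockaddr *)&sa, sizeof(sa),
                    host, sizeof(host), NULL, 0, NI_NUMERICHOST) == 0)
        printf("numeric (-n): %s\n", host);

    return 0;
}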
