Ping status and return - Linux

I am writing a Perl script which issues a ping to a certain IP address, with a packet size of 65000 and a count of 1000.
Now when the remote PC is up, things are okay: the ping succeeds and ends after sending 1000 packets.
However, when the remote PC is down, it always returns "Destination host unreachable". Ping keeps trying for a long time, sending ARP requests/ping requests, before it eventually gives up with a 100% packet loss message.
My question is: how can I make ping exit if, say, the first 100 pings get no response? I do not want to wait for long when the initial pings fail; I want ping to exit. How do I do this?
I am currently using Linux for my script. Please let me know how to do this for
Linux
Windows.
[Please note the packet size can vary, so I want a solution that is independent of the size/count.]

I would recommend using the Net::Ping module, which lets you control individual pings directly from your script: you choose the protocol, packet size and timeout, send pings one at a time, and can simply stop as soon as the first batch goes unanswered.

Related

ICMP timestamps added to ping echo requests in Linux. How are they represented as bytes? Or how do I convert UNIX Epoch time to that timestamp format?

I encountered this issue while writing a ping program in C myself and used Wireshark to dig further into the problem: on Linux, ping adds a timestamp field of 8 bytes (a TOD timestamp) after the ICMP header of each echo request it sends. Ping on Windows doesn't add that timestamp; I think it does the time calculations locally instead. Now my question is: how do you convert the time from Unix Epoch format (the number of seconds since 1970 which you get with the 'time' function in C) to that 8-byte TOD format?
I got to this question because, after quite a bit of research, my ping.c program sends its ICMP echo request to the destination; a test with 2 hosts shows that it arrives, but no echo reply ever comes back, while the native Linux ping works properly. I can only imagine 2 possible causes:
I didn't fill in the fields of the ICMP and IP headers correctly. To be honest, I pretty much doubt this possibility, because Wireshark shows the message arrives at the destination and is recognized as an echo request, yet it doesn't trigger any echo reply. However, if this is the cause, the only thing I can think of is that timestamp, which I don't know how to convert into TOD form so that it occupies at most 8 bytes.
There might be a firewall at the destination or some other system-dependent factor.
https://www.ibm.com/docs/en/zos/2.2.0?topic=problems-using-ping-command
The Ping command does not use the ICMP/ICMPv6 header sequence number field (icmp_seq or icmp6_seq) to correlate requests with ICMP/ICMPv6 Echo Replies. Instead, it uses the ICMP/ICMPv6 header identifier field (icmp_id or icmp6_id) plus an 8-byte TOD time stamp field to correlate requests with replies. The TOD time stamp is the first 8-bytes of data after the ICMP/ICMPv6 header.
Finally, to repeat the initial question:
How do you convert the UNIX Epoch time to the TOD timestamp form which Linux ping adds at the end of the ICMP header/beginning of the data field?
A useful, but I think not sufficient, explanation I found here:
https://towardsdatascience.com/3-tips-to-handle-timestamps-in-c-ad5b36892294
I should probably mention I'm working on Ubuntu 20.04 (Focal Fossa).
I found a related post here. The book "Principles of Operation" is mentioned in its comments. I skimmed through it, but it seems to be generally lower-level than C, so if anyone knows another place/way to answer the question, that would be better.
Background
Including the UNIX timestamp of the time of transmission in the first data bytes of the ICMP Echo message is a trick/optimization the original ping by Mike Muuss used to avoid keeping track of it locally. It exploits the following guarantee made by RFC 792's Echo or Echo Reply Message description:
The data received in the echo message must be returned in the echo reply message.
Many (if not all) BSD ping implementations are based on Mike Muuss' original implementation and therefore kept this behavior. On Linux systems, ping is typically provided by iputils, GNU inetutils, or Busybox. All exhibit the same behavior. fping is a notable exception, which stores a mapping from target host and sequence number to timestamp locally.
Implementations typically store the timestamp in the sender's native precision and byte order, as opposed to a well-defined precision in the conventional network byte order (big endian) normally used for data transmitted over the network, because it is intended to be interpreted only by the original sender; everyone else should treat it as an opaque stream of bytes.
Because this is so common, however, the Wireshark ICMP dissector (as of v3.6.2) tries to be clever and heuristically decodes it nonetheless if the first 8 data bytes look like a struct timeval timestamp with 32-bit precision for seconds and microseconds, in either byte order. Note that if the sender was actually using big-endian 64-bit precision this will fail, and if it was using little-endian 64-bit precision it will truncate the microseconds before the Epochalypse and fail after that.
Obtaining and serializing epoch time
To answer your actual question:
How do you convert the UNIX Epoch time to the TOD timestamp form which Linux ping adds at the end of the ICMP header/beginning of the data field?
Most implementations use the obsolescent gettimeofday(2) instead of the newer clock_gettime(2). The following snippet is taken from iputils:
struct icmphdr *icp /* = pointer to ICMP packet */;
struct timeval tmp_tv;
gettimeofday(&tmp_tv, NULL);
memcpy(icp + 1, &tmp_tv, sizeof(tmp_tv));
memcpy from a temporary variable, instead of passing icp + 1 directly as the target to gettimeofday, is used to avoid potential improper-alignment, effective-type, and strict-aliasing violations.
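The readback on the reply side is the mirror image: since the echo reply returns the payload unchanged, the sender can copy those leading bytes back into a struct timeval and subtract. A minimal sketch of that idea (the helper function is hypothetical, not taken from iputils):
#include <netinet/ip_icmp.h>   /* struct icmphdr */
#include <string.h>
#include <sys/time.h>

/* Hypothetical helper, not iputils source: recover the timeval that was
 * memcpy'd into the echo request and compute the round-trip time in ms. */
double rtt_ms_from_reply(const struct icmphdr *icp)
{
    struct timeval sent, now;

    memcpy(&sent, icp + 1, sizeof(sent));   /* leading payload bytes = the timeval we sent */
    gettimeofday(&now, NULL);

    return (now.tv_sec - sent.tv_sec) * 1000.0 +
           (now.tv_usec - sent.tv_usec) / 1000.0;
}
Note that this only works on the machine that sent the request, because the bytes are in that machine's native layout; that is exactly why nobody else needs to understand the format.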
I appreciate the clear answers. I actually managed to solve the problem and make the ping function work, and I have to say the problem was certainly not the timestamp. Yes indeed, I was talking about the "Echo or Echo Reply Message"; one way of implementing ping is to use ICMP's echo and echo reply messages. When I posted this question I was stuck, probably because I wasn't clearly separating the main aspects of the problem.
So I started examining the packet sent by the native ping on my Ubuntu 20.04 (Focal Fossa) with Wireshark, hoping to get a better grasp of how to fill in the headers of the packet sent by my program (the IP and ICMP headers). This question simply arose because I noticed that in this version of Ubuntu, ping adds a timestamp (indeed of 32 bits) after the end of the ICMP header, basically in the data/payload section. I also used Wireshark on Windows 10 and saw that no timestamp is added after the header there. So yes, it might come down to different versions of the program being used.
The main point I want to emphasize is that my final version of ping has nothing to do with any timestamps. So yes, they are not a crucial aspect for ping to work.

Linux. Can packets pass libpcap by?

I am writing a Linux program that keeps track of internet traffic, in other words, how many bytes I have used over some period of time. I use Pcap4J for Java (a libpcap wrapper) and I have a question about it: what happens if my program hasn't yet processed a packet when a new one arrives?
1. Does it slow down the download/upload rate for the whole OS?
2. Does it skip the new one, so that my program will never know it passed by?
In other words, say I've downloaded 1 GB of data on my computer. How many bytes does my program see: 100%, or can some packets pass my program by and still reach their destination?
And let me know if it is a bad idea to write a traffic-control app using this lib!
Your application loses packets. In your words, they pass by.
However, if your idea is to have a metric of how many packets went in and out of your system in a given time, there are definitely better ways to achieve it.
On Linux you can just do a script that does something like this:
DEVICE=eth0
# byte counters maintained by the kernel for this interface
RX0=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
TX0=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)
while : ; do
    sleep 5
    RX1=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
    TX1=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)
    echo "RX bytes: $(($RX1-$RX0))"
    echo "TX bytes: $(($TX1-$TX0))"
    RX0=$RX1
    TX0=$TX1
done
You can adjust the interval or turn the device into a parameter; I think you'll get the idea.

Dropping packets with netcat using a UDP transfer?

I'm working on sending large data files between two Linux computers via a 10 Gigabit Ethernet cable and netcat with a UDP transfer, but seem to be having issues.
After running several tests, I've come to the conclusion that netcat is the issue. I've also tested the UDP transfer using UDT, Tsunami-UDP, and a Python UDT transfer, none of which had any packet loss issues.
On the server side, we've been doing:
cat "bigfile.txt" | pv | nc -u IP PORT
then on the client side, we've been doing:
nc -u -l PORT > "outputFile.txt"
A few things that we've noticed:
On one of the computers, regardless of whether it's the client or server, it just "hangs". That is to say, even once the transfer is complete, Linux doesn't kill the process and move to the next line in the terminal.
If we run pipe view on the receiving side as well, the incoming data rate is significantly lower than what the sending side thinks it's sending.
Running Wireshark doesn't show any packet loss.
Running the system performance monitor in Linux shows that the incoming data rate (for the receiving side) is the same as the outgoing data rate from the sending side. This is in contrast to what pipe view thinks (see #2).
We're not sure where the issue is with netcat, and if there is a way around it. Any help/insights would be greatly appreciated.
Also, for what it's worth, using netcat with a TCP transfer works fine. And, I do understand that UDP isn't known for reliability, and that packet loss should be expected, but it's the protocol we must use.
Thanks
It could well be that the sending instance is sending the data too fast for the receiving instance. Note that this can occur even if you see no drops on the receiving NIC (as you seem to be saying), because the loss can occur at OS level instead. Your OS could have its UDP buffers overflowing. Run this command:
watch -d "cat /proc/net/snmp | grep -w Udp"
To see if your RcvbufErrors field is non-zero and/or growing while your file transfer is going on.
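If those counters are climbing, the receiving socket's buffer is too small for the burst rate. A hedged sketch of one workaround (not part of the original answer): replace nc -u -l with a tiny receiver that asks the kernel for a larger per-socket buffer. The port and buffer size below are placeholders, and the kernel silently clamps SO_RCVBUF at net.core.rmem_max.
/* Hypothetical alternative to "nc -u -l PORT > outputFile.txt":
 * a minimal UDP receiver that requests a large socket receive buffer. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int rcvbuf = 8 * 1024 * 1024;              /* request an 8 MiB socket buffer */
    struct sockaddr_in addr;
    char buf[65536];

    if (fd < 0) { perror("socket"); return 1; }
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");       /* value is clamped at net.core.rmem_max */

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5001);               /* placeholder port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0) { perror("recv"); break; }
        fwrite(buf, 1, (size_t)n, stdout);     /* behave roughly like nc -u -l */
    }
    close(fd);
    return 0;
}
Slowing the sender down, or raising net.core.rmem_default / net.core.rmem_max with sysctl, attacks the same problem from the other side.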
This answer (How to send only one UDP packet with netcat?) says that nc sends one packet per line. Assuming that's true, this could lead to a significantly higher number of packets than your other transfer mechanisms. Presumably, as @Smeeheey suggested, you're running out of receive buffers on the receiving end.
To cause your sending end to exit, you can add -q 1 to the command line (exit 1 second after seeing end of file).
But there's no way the receiving-end nc can know when the transfer is complete. This is why those other mechanisms are "protocols" -- they have mechanisms built into them to communicate the bounds of a file. Raw UDP has no concept of end of file.
Tuning the Linux networking stack is a bit complicated, as there are many components to tune to figure out where data is being dropped.
If possible/feasible, I'd recommend that you start by monitoring packet drops throughout the entire network stack. Once you've done that, you can determine where exactly packets are being dropped and then adjust tuning parameters as needed. There are a lot of different files to measure with lots of different fields. I wrote a detailed blog post about monitoring and tuning each component of the Linux networking stack from top to bottom. It's a bit difficult to summarize all the information there, but take a look, I think it can help guide you.

Calculate Sent and Received PING Packets at run-time in Linux

I have to calculate sent and received PING packets at run-time in Linux. Now in Linux, even with verbose output, nothing gets printed if packets are not received. Output is printed only for successful replies or "destination host unreachable".
How can sent and received packets be seen at run-time on the terminal? Any method by which this can be accomplished?
The simplest solution - if you want to see all sends and all receives - is to actually make the source do that. The source for the ping command is widely available and can be edited to make it do what you want.
That said, if you don't want to edit the source because that doesn't suit you, you really should use the -c option, the count of packets to send, and invoke the command to send one packet at a time. The return code from the command tells you whether a reply was seen, and you can use (roughly) the time the command started as the origin time of the packet.
Bear in mind ping is quite deterministic in its behaviour. By default, it sends one packet per second, so you should easily be able to do the math based on how long it runs and the count of packets you tried to use.
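A rough sketch of the one-packet-at-a-time approach in C (the target address 192.0.2.1, the 1-second reply timeout and the 100-packet count are placeholders, not from the question):
/* Hypothetical sketch: invoke "ping -c 1" repeatedly and count sent/received
 * packets from the exit status. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(void)
{
    int sent = 0, received = 0;

    for (int i = 0; i < 100; i++) {
        int status = system("ping -c 1 -W 1 192.0.2.1 > /dev/null 2>&1");

        sent++;
        if (status != -1 && WIFEXITED(status) && WEXITSTATUS(status) == 0)
            received++;            /* ping exits 0 only if a reply came back */

        printf("sent=%d received=%d\n", sent, received);
        fflush(stdout);
    }
    return 0;
}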

Netlink error while using IP queue

I seem to have an issue with IP queue.
I have a Linux machine that I am using to run some experiments.
The linux machine is configured to be a router, having two NICs, connecting two other computers, and managing their network traffic.
All incoming packets are captured, using iptables, and analyzed by a C application.
The application analyzing the packets has a built-in delay, as part of the experiment.
So I have one very fast computer sending packets through my linux-router and a (relatively) slow linux-router that analyses and deals with the packets, one by one.
This situation leads to the fact that when I fire up a sender application on one of the computers connected to the linux-router, my IP queue on the linux-router gets filled up (almost) instantaneously.
The IP queue's max length is currently set to 1024, and if it overflows, the packets are dropped. This is expected and I'm OK with it.
But, (and this is where it gets interesting), every now and then I get the following error:
"Failed to receive netlink message: No buffer space available"
At first, I thought this was due to the IP queue overflowing, but after some analysis I found that sometimes I get the error even if the IP queue buffer did not overflow, and sometimes I DON'T get the message even though the buffer DID overflow.
When I run cat /proc/net/ip_queue, I get the following table (which I also use to monitor IP queue overflow):
Peer PID : 27389
Copy mode : 2
Copy range : 65535
Queue length : 0
Queue max. length : 1024
Queue dropped : 1166875
Netlink dropped : 2916
Looking at the last two values, Queue dropped seems to refer to packets that did not manage to get into the IP queue because the buffer was full. I can see this value rise as I bombard the linux-router. Netlink dropped (as its name implies :) ) seems to be related to the error I'm getting.
I did my best to search for material on this error, but wasn't able to find anything that seemed to point me in the required direction.
Bottom line: Why am I getting this error and what can I do to avoid it?
Try increasing the receive buffer space on your netlink receive socket using setsockopt; this should remove the error!
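A minimal sketch of that (hedged: nl_fd stands for whatever netlink socket descriptor your application already has; the 4 MiB size is a placeholder):
/* Hedged sketch, not from the original answer: ask for a larger receive
 * buffer on the netlink socket that ip_queue delivers packets on. */
#include <stdio.h>
#include <sys/socket.h>

static void grow_netlink_rcvbuf(int nl_fd)
{
    int rcvbuf = 4 * 1024 * 1024;   /* 4 MiB; size this to your burst rate */

    if (setsockopt(nl_fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");
    /* Requests above net.core.rmem_max are silently clamped; SO_RCVBUFFORCE
     * (which needs CAP_NET_ADMIN) can exceed that limit on newer kernels. */
}
Raising net.core.rmem_default / net.core.rmem_max with sysctl has a similar effect system-wide if you would rather not touch the code.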

Resources