The receiving platform is Linux-2.6.37_DM8127_IPNC_3.80.00 (TI's DaVinci processor).
In the course of a discovery process, I send a short broadcast (from Win7); the expected response is several bytes longer than the broadcast.
I always get a partial response, having exactly the length of the broadcast. This happens regardless of the computer, firewall on/off, antivirus, etc.
If I pad the broadcast to the length of the expected response, I get the desired response.
Is this some Linux feature, or should I keep searching for a bug in my code?
I need to create a monitor that will log information about missing packets when using ZeroMQ ipc. Actually, I don't really understand everything about it, because there are also LINX and TIPC protocols involved. Can you please explain that to me and answer the main question?
You could make the application self-monitoring, by including a message serial number in each message structure. The message sender keeps track of the serial number it last sent, and increments it every time it sends a message.
The recipient should then be receiving messages with ever-increasing message serial numbers embedded. If that ever jumps by 2 or more, a message has gone missing.
IPC is not lossy the way a network can be; the bytes put in come out the other end. TCP is not lossy either, provided both ends are still running and the network itself hasn't failed. However, depending on the ZMQ pattern used and how it's set up, whole messages can go undelivered (for example, if the recipient hasn't connected yet). If that's what you mean by "packet missing", it would be revealed by including an incrementing message serial number.
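As a minimal sketch of that idea, assuming pyzmq and a PUSH/PULL pair over ipc (the endpoint name and message layout are illustrative, not part of the original question):

import zmq

ENDPOINT = "ipc:///tmp/demo.sock"  # illustrative ipc endpoint

def sender(count=100):
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.bind(ENDPOINT)
    for serial in range(count):
        # embed the ever-increasing serial number in each message
        push.send_json({"serial": serial, "payload": "..."})

def monitor():
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.connect(ENDPOINT)
    last = None
    while True:
        msg = pull.recv_json()
        serial = msg["serial"]
        if last is not None and serial > last + 1:
            # a jump of 2 or more means messages went missing
            print("lost %d message(s) after serial %d" % (serial - last - 1, last))
        last = serial

Comparing the monitor's last serial number against the sender's own count also reveals messages lost at the tail end.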
I am developing a program that sniffs network packets using a raw socket (AF_PACKET, SOCK_RAW) and processes them in some way.
I am not sure whether my program runs fast enough and manages to capture all packets on the socket. I am worried that the receive buffer for this socket occasionally fills up (due to traffic bursts) and some packets are dropped.
How do I know if packets were dropped due to lack of space in the socket's receive buffer?
I have tried running ss -f link -nlp.
This outputs the number of bytes currently stored in the receive buffer for that socket, but I cannot tell from it whether any packets were dropped.
I am using Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-52-generic x86_64).
Thanks.
I was having a similar problem. I knew that tcpdump was able to generate statistics about packet drops, so I tried to figure out how it did that. Looking at the code of tcpdump, I noticed that it does not generate those statistics by itself; it uses the libpcap library to get them. libpcap, in turn, gets those statistics by going through the if_packet.h header and using the PACKET_STATISTICS socket option (at least I think so, but I'm no C expert).
Therefore, I saw only two solutions to the problem:
Interact somehow with the Linux header files from my Python script to get the packet statistics, which seemed a bit complicated.
Use the Python version of libpcap, which is pypcap, to get that information.
Since I had no clue how to do the first thing, I implemented the second option. Here is an example of how to get packet statistics using pypcap and how to parse the packet data using dpkt:
import pcap
import dpkt
import socket

pc = pcap.pcap(name="eth0", timeout_ms=10000, immediate=True)

def packet_handler(ts, pkt):
    # print packet statistics: (packets received, packets dropped,
    # packets dropped by the interface)
    print(pc.stats())
    # example packet parsing using dpkt
    eth = dpkt.ethernet.Ethernet(pkt)
    if eth.type != dpkt.ethernet.ETH_TYPE_IP:
        return
    ip = eth.data
    layer4 = ip.data
    ipsrc = socket.inet_ntoa(ip.src)
    ipdst = socket.inet_ntoa(ip.dst)

pc.loop(0, packet_handler)
The tpacket_stats structure is defined in the linux/if_packet.h header file.
Create a variable of type struct tpacket_stats and pass it to getsockopt with the SOL_PACKET level and the PACKET_STATISTICS option; this will give you the received and dropped packet counts.
-- Sometimes drops are simply due to the size of the receive buffer.
-- So if you want to decrease the drop count, try increasing the buffer size using the setsockopt function (SO_RCVBUF).
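As a rough Python sketch of that approach (mirroring option 1 above): the SOL_PACKET and PACKET_STATISTICS constants come from the Linux headers and are defined by hand here, since the socket module may not export them; the interface name is illustrative and the program needs root privileges.

import socket
import struct

SOL_PACKET = 263         # from linux/socket.h
PACKET_STATISTICS = 6    # from linux/if_packet.h
ETH_P_ALL = 0x0003       # capture every protocol

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind(("eth0", 0))

# Enlarge the receive buffer to ride out traffic bursts; the granted size is
# capped by the net.core.rmem_max sysctl.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

# ... receive and process packets with s.recv(65535) in a loop ...

# tpacket_stats is two unsigned ints: tp_packets (received) and tp_drops
# (dropped because the receive buffer was full). Reading it resets the counters.
raw = s.getsockopt(SOL_PACKET, PACKET_STATISTICS, 8)
tp_packets, tp_drops = struct.unpack("II", raw)
print("received %d, dropped %d" % (tp_packets, tp_drops))

Because reading PACKET_STATISTICS resets the kernel's counters, each call reports the counts since the previous call.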
First off, switch your operating system.
You need a reliable, network-oriented operating system, not some pink fluffy "ease of use" distribution with "security" functionality enabled: NetBSD, or Gentoo/Arch Linux (the bare installations, not the GUI-kitted ones).
Start a simultaneous tcpdump on a network tap and capture the traffic you're supposed to receive alongside your program, then compare the results.
There's no efficient way to check on the receiving end whether you've received all the packets you intended to, since the packets might be dropped at a lower level than you anticipate.
Also, this is a question for Unix @ StackOverflow; there's no programming here that I can see, at least there's no code.
The only certain way to verify packet drops is to have a much beefier sender (perhaps a farm of machines sending packets) to a single client, and to record every packet sent to your receiver. Then have the statistical data analyzed and compared against your senders' records to see how much you dropped.
The cheaper way is to buy a network tap, or, even more ad hoc, to enable port mirroring on your switch if possible. This lets you dump as much traffic as possible onto a second machine.
This will give you a more accurate result, because your application machine is already busy receiving and processing the incoming traffic.
Furthermore, this is why network taps are effective: they split the communication into two channels, the receiving and sending directions of your traffic, if you will. This lets you capture traffic on two separate machines (also using tcpdump, but compared with a mirrored port you get more accurate traffic mirroring).
So either use port mirroring, or buy a dedicated network tap.
I am writing a serial port application in VC++, in which I open a port on a switch device, send some commands, and display their output. I run a thread that continuously reads the open port for the output of a given command. My main thread waits until the read completes, but the problem is: how do I recognize that the command output has ended, so that I can signal the main thread?
Almost any serial port communication requires a protocol: some way for the receiver to discover that a response has been received in full. A very simple one is a unique byte or character that can never appear in the rest of the data. A linefeed is standard, used by any modem for example.
This needs to get more elaborate when you need to transfer arbitrary binary data. A common solution is to send the length of the response first; the receiver can then count down the received bytes to know when the response is complete. This is often embellished with a specific start byte value so that the receiver has some chance to re-synchronize with the transmitter, and often includes a checksum or CRC so that the receiver can detect transmission errors. A further embellishment is to make errors recoverable with ACK/NAK responses from the receiver; you'd then be well on your way to re-inventing TCP. The RATP protocol in RFC-916 is a good example, albeit widely ignored.
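As a Python 3 sketch of such framing (the start byte, one-byte length field, and additive checksum are illustrative choices, not the poster's actual protocol):

START = 0x7E  # illustrative start-of-frame marker

def checksum(payload):
    # simple 8-bit additive checksum; a real protocol might use a CRC
    return sum(payload) & 0xFF

def encode_frame(payload):
    # start byte, one-byte length, payload, checksum
    return bytes([START, len(payload)]) + payload + bytes([checksum(payload)])

def decode_frames(buf):
    # Consume bytes from buf, returning (complete payloads, leftover bytes).
    frames = []
    i = 0
    while i < len(buf):
        if buf[i] != START:           # re-synchronize on the start byte
            i += 1
            continue
        if i + 2 > len(buf):          # length byte not received yet
            break
        length = buf[i + 1]
        end = i + 2 + length + 1      # start + length + payload + checksum
        if end > len(buf):            # frame not complete yet
            break
        payload = buf[i + 2:end - 1]
        if checksum(payload) == buf[end - 1]:
            frames.append(payload)    # checksum OK, deliver the payload
        i = end                       # on a bad checksum, drop the frame
    return frames, buf[i:]

The reading thread can append whatever it receives to a buffer, call decode_frames on it, and signal the main thread once a complete frame (or the expected terminator) has arrived.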
I seem to have an issue with IP queue.
I have a linux machine that I am using to run some experiments.
The linux machine is configured to be a router, having two NICs, connecting two other computers, and managing their network traffic.
All incoming packets are captured using iptables and analyzed by a C application.
The application analyzing the packets has a built-in delay, as part of the experiment.
So I have one very fast computer sending packets through my linux-router and a (relatively) slow linux-router that analyses and deals with the packets, one by one.
As a result, when I fire up a sender application on one of the computers connected to the linux-router, the IP queue on the linux-router fills up (almost) instantaneously.
The IP queue's maximum length is currently set to 1024, and if it overflows, packets are dropped. This is expected and I'm OK with it.
But, (and this is where it gets interesting), every now and then I get the following error:
"Failed to receive netlink message: No buffer space available"
At first I thought this was due to the IP queue overflowing, but after some analysis I found that sometimes I get the error even though the IP queue buffer did not overflow, and sometimes I DON'T get the message even though the buffer DID overflow.
When I run > cat /proc/net/ip_queue, I get the following table (also used to monitor the IP queue overflow):
Peer PID : 27389
Copy mode : 2
Copy range : 65535
Queue length : 0
Queue max. length : 1024
Queue dropped : 1166875
Netlink dropped : 2916
Looking at the last two values: Queue dropped seems to refer to packets that did not manage to get into the IP queue because the buffer was full; I can see this value rise as I bombard the linux-router. Netlink dropped (as its name implies) seems to be related to the error I'm getting.
I did my best to search for material on this error, but wasn't able to find anything that seemed to point me in the required direction.
Bottom line: Why am I getting this error and what can I do to avoid it?
Try increasing the receive buffer space on your netlink receive socket using setsockopt; this should remove the error.
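As a rough sketch of that call (the poster's analyzer is a C application using ip_queue; the same SO_RCVBUF idea is shown here on a Python AF_NETLINK socket, with the NETLINK_FIREWALL protocol number taken from linux/netlink.h and the 4 MB size purely illustrative):

import socket

# NETLINK_FIREWALL is the protocol used by ip_queue; defined by hand here in
# case the socket module does not export it.
NETLINK_FIREWALL = 3

s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_FIREWALL)

# Ask the kernel for a larger receive buffer, e.g. 4 MB. The value actually
# granted is capped by the net.core.rmem_max sysctl unless SO_RCVBUFFORCE is
# used with CAP_NET_ADMIN.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

# Read the value back to see what was actually granted.
print("SO_RCVBUF is now %d" % s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

In the C application, the same setsockopt call goes on the netlink socket that libipq opened, before the traffic starts.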
I am calling the recvfrom API on a valid address, trying to read data of size 9600 bytes; the buffer I have provided is 12 KB in size. I am not even getting select read events.
Even though the recommended MTU size is 1.5 KB, I am able to send and receive packets of 4 KB.
I am using the Android NDK (Linux) for development.
Please help. Is there a socket option I have to set to read large buffers?
If you send a packet larger than the MTU, it will be fragmented. That is, it'll be broken up into smaller pieces, each of which fits within the MTU. The problem with this is that if even one of those pieces is lost (quite likely on a cellular connection...), the entire packet effectively disappears.
To determine whether this is the case you'll need to use a packet sniffer on one (or both) ends of the connection. Wireshark is a good choice on the PC end, or tcpdump on the Android side (you'll need root). Keep in mind that home routers may reassemble fragmented packets; this means that if you're sniffing packets from inside a home router/firewall, you might not see any fragments arrive until all of them reach the router (and obviously, if some are getting lost, this won't happen).
A better option is to simply ensure that you always send packets smaller than the MTU; fragmentation is almost never the right thing to be doing. Keep in mind that the MTU may vary at the various hops along the path between server and client. You can either use the common choice of a bit less than 1500 (1400 ought to be safe), or probe for it by setting the MTU discovery flag on your UDP socket (via IP_MTU_DISCOVER) and always sending less than the value returned by getsockopt's IP_MTU option (including on retransmits!).
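As a rough Python sketch of that probing approach (the IP_MTU_DISCOVER, IP_PMTUDISC_DO, and IP_MTU values are the Linux ones, defined by hand in case the socket module does not export them; the destination address and port are illustrative):

import socket

IP_MTU_DISCOVER = 10   # from the Linux in.h headers
IP_PMTUDISC_DO = 2     # always set the Don't-Fragment bit
IP_MTU = 14            # read back the current path MTU estimate

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Make the kernel perform path MTU discovery instead of fragmenting datagrams.
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)

# IP_MTU is only meaningful on a connected socket.
s.connect(("192.0.2.1", 9999))
path_mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
print("current path MTU estimate: %d" % path_mtu)

# Leave room for the 20-byte IP header and 8-byte UDP header.
max_payload = path_mtu - 28
s.send(b"x" * min(4096, max_payload))

The IP_MTU value typically starts at the first-hop MTU and shrinks as ICMP "fragmentation needed" messages come back, so it is worth re-reading it whenever a send fails with EMSGSIZE.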