How to send ARP packets to a queue from arptables - Linux

My aim was to find a way to process (drop, accept, forward, etc.) packets that come from Layer 2.
I know that "iptables" on Linux lets us send packets to "NFQUEUE" for further processing in user space,
but it only supports Layer 3 packets, which means it does not see packets that stay at Layer 2.
Although "arptables" does match packets destined for Layer 2, I couldn't find a way to send them to "NFQUEUE".
Is there any way to choose whether to accept/drop/continue Layer 2 packets?

Only ebtables has a target (-j arpreply) to generate ARP packets as of this date, though you can filter with either ebtables or arptables. NFQUEUE is also usable from ebtables, and in fact could quickly be extended to arptables by just adding an entry for it, but so far arptables has been pretty much a niche program, even more so than ebtables.
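Whichever table queues the packet, the user-space side looks the same. Below is a minimal sketch of an NFQUEUE consumer in C using libnetfilter_queue, assuming some rule already directs traffic to queue number 0 (error handling omitted); it simply accepts every packet, which is where your own accept/drop logic would go.

    /* nfq_consume.c - sketch of an NFQUEUE consumer; build with -lnetfilter_queue */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/netfilter.h>              /* NF_ACCEPT, NF_DROP */
    #include <libnetfilter_queue/libnetfilter_queue.h>

    static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                  struct nfq_data *nfa, void *data)
    {
        uint32_t id = 0;
        struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
        if (ph)
            id = ntohl(ph->packet_id);
        /* inspect the payload here; return NF_DROP to drop instead */
        return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
    }

    int main(void)
    {
        struct nfq_handle *h = nfq_open();
        struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
        nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

        char buf[4096];
        int fd = nfq_fd(h);
        ssize_t n;
        while ((n = recv(fd, buf, sizeof(buf), 0)) >= 0)
            nfq_handle_packet(h, buf, (int)n);

        nfq_destroy_queue(qh);
        nfq_close(h);
        return 0;
    }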

Related

tcpreplay is sending packets out of order?

When I use tcpreplay to send packets to my switch, I find that packets arrive out of order. For example, using tcpreplay -i eth1 test.pcap:
I send packets like [1,2,3,4,5,……],
but the switch receives [1,3,4,2,5,……].
Does this problem look familiar? How did you solve it?
When you say the switch received a different packet order, how are you determining that this is the case? I ask because sniffing on the switch port would seem like a valid way to check for this, but if you're using a SPAN port then, yeah, switches can re-order frames in my experience, so I'm not that surprised.
When you run tcpdump on the tcpreplay box, which order does it show the packets being sent? Also, is there another switch in between? Because a lot of switches use a "store and forward" approach which can reorder frames (this is also why SPAN ports tend to re-order).
Lastly, tcpreplay always sends packets in order to the kernel/NIC driver/NIC because it processes the pcap file sequentially. If your computer is actually sending frames out of order, then that is happening either in the kernel, NIC driver or NIC hardware/firmware.
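To check where the reordering happens, you can capture on both the tcpreplay box and the receiving side and diff the traces packet by packet. A small libpcap reader like the sketch below (assuming plain Ethernet/IPv4 captures) prints each packet's timestamp and IPv4 ID field, which is usually enough to line the two traces up.

    /* print_order.c - dump timestamp and IPv4 ID of every packet in a pcap
     * file, so a sender-side and a receiver-side capture can be compared.
     * Build: gcc print_order.c -lpcap */
    #include <stdio.h>
    #include <pcap.h>
    #include <netinet/ip.h>
    #include <arpa/inet.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file.pcap\n", argv[0]);
            return 1;
        }

        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_offline(argv[1], errbuf);
        if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

        struct pcap_pkthdr *hdr;
        const u_char *data;
        while (pcap_next_ex(p, &hdr, &data) == 1) {
            /* assumes Ethernet framing: the IPv4 header starts at offset 14 */
            const struct ip *iph = (const struct ip *)(data + 14);
            printf("%ld.%06ld id=%u\n",
                   (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec,
                   ntohs(iph->ip_id));
        }
        pcap_close(p);
        return 0;
    }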

Linux (uClinux 2.4.x)

I have a question.
I am using uClinux 2.4.x, and in this Linux I have my own adapter code for reading frames from the ingress port.
I have added a debug log at the exact place where the frames are read word by word, and I am sure that I receive every frame transmitted by the peer
(at Layer 2 I am receiving all frames).
From there I invoke "netif_rx" to send all received frames to the upper layers (i.e. the network and transport layers).
The problem: I observe that some packets are dropped at the UDP protocol (transport layer).
How I confirmed it: I added counters at Layer 1 and at the UDP layer, and the two counts are not equal.
That means that even though we receive all frames from Layer 1, somewhere on the way to UDP some of them are dropped.
Could anyone suggest where exactly the problem could be, and how to check whether memory is full or skb allocation stops happening for further packets at the UDP layer?
Kindly share your opinion; it will be of great help. Please let me know if you need more information.
BR
Karn
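One concrete place to check is the return value of netif_rx() itself: it returns NET_RX_DROP when the per-CPU backlog queue overflows, which is one of the spots where frames can vanish between the driver and UDP. A rough sketch for a 2.4-era driver (the delivery routine name is hypothetical):

    /* Count frames the backlog queue refuses. If this counter grows, raising
     * /proc/sys/net/core/netdev_max_backlog may help; drops at the socket
     * itself show up in the UDP "InErrors" field of /proc/net/snmp. */
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static unsigned long my_rx_ok, my_rx_backlog_drops;

    /* hypothetical routine called for every frame read from the ingress port */
    static void my_adapter_deliver(struct sk_buff *skb)
    {
        if (netif_rx(skb) == NET_RX_DROP)
            my_rx_backlog_drops++;   /* stack freed the skb, frame never delivered */
        else
            my_rx_ok++;
    }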

Packet sniffing with channel hopping in Linux

I want to scan WiFi on a b/g interface, and I want to sniff packets on each channel, spending 100 ms on each one. One of my biggest requirements is not to store the packets I capture (because of limited disk space): my application will parse the packets, retrieve the Tx MAC and RSSI, construct a list of (MAC, avg RSSI, #records) at the end of every minute, then clear the list and start over.
I've figured out two ways to do channel hopping on Linux:
Option 1: Use the wi_set_channel(struct wif *, int channel) function (from aircrack-ng's osdep library) in C, and write the sniffing code in C
Option 2: Use the Linux command iw dev wlan0 set channel 4, and use any language like Python+Scapy or C to sniff the packets
I'd like to know which of the two is more efficient, if at all, so that the delay while the WiFi interface switches to a different channel is minimal. I suspect that this delay means packets are lost while the switch to a different channel happens; is that the case?
I would also like to know some of the other ways to solve this problem in linux.
The answer to your first question is straightforward: use Option 1 and have two threads doing the work, one thread populating an in-memory circular buffer with packets collected from the channels, and a second thread processing them in sequence (a sketch follows below). You can pick the best packet-discarding algorithm depending on the measured performance of the processing thread and other factors, if any.
As for the second question, I would go with the above, so as to be in complete control of exactly how you tune the algorithm rather than depending on canned processing tools.
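A minimal sketch of that circular buffer in C with POSIX threads, for illustration only; the capture and channel-hopping calls are left out, and the slot size and count are arbitrary:

    #include <pthread.h>
    #include <stdint.h>
    #include <string.h>

    #define RING_SLOTS 4096
    #define SNAP_LEN   256   /* enough for radiotap + 802.11 headers */

    struct slot { uint16_t len; uint8_t data[SNAP_LEN]; };

    static struct slot ring[RING_SLOTS];
    static unsigned head, tail;           /* head: producer, tail: consumer */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

    /* producer: called from the capture thread for every sniffed frame */
    void ring_put(const uint8_t *pkt, uint16_t len)
    {
        pthread_mutex_lock(&lock);
        unsigned next = (head + 1) % RING_SLOTS;
        if (next != tail) {               /* drop the newest frame when full */
            ring[head].len = len < SNAP_LEN ? len : SNAP_LEN;
            memcpy(ring[head].data, pkt, ring[head].len);
            head = next;
            pthread_cond_signal(&nonempty);
        }
        pthread_mutex_unlock(&lock);
    }

    /* consumer: blocks until a frame is available, then hands it out
     * for MAC/RSSI parsing */
    void ring_get(struct slot *out)
    {
        pthread_mutex_lock(&lock);
        while (tail == head)
            pthread_cond_wait(&nonempty, &lock);
        *out = ring[tail];
        tail = (tail + 1) % RING_SLOTS;
        pthread_mutex_unlock(&lock);
    }

The producer drops the newest frame when the ring is full, which is the simplest discard policy; as noted above, you would tune this to the measured speed of the processing thread.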

Capturing performance with pcap vs raw socket

When capturing network traffic for debugging, there seem to be two common approaches:
Use a raw socket.
Use libpcap.
Performance-wise, is there much difference between these two approaches? libpcap seems a nice compatible way to listen to a real network connection or to replay some canned data, but does that feature set come with a performance hit?
This answer is intended to explain more about libpcap.
libpcap uses PF_PACKET sockets to capture packets on an interface. Refer to the following link:
https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt
From the above link:
In Linux 2.4/2.6/3.x if PACKET_MMAP is not enabled, the capture process is very inefficient. It uses very limited buffers and requires one system call to capture each packet, it requires two if you want to get packet's timestamp (like libpcap always does).
In the other hand PACKET_MMAP is very efficient. PACKET_MMAP provides a size configurable circular buffer mapped in user space that can be used to either send or receive packets. This way reading packets just needs to wait for them, most of the time there is no need to issue a single system call. Concerning transmission, multiple packets can be sent through one system call to get the highest bandwidth. By using a shared buffer between the kernel and the user also has the benefit of minimizing packet copies.
The performance improvement may vary depending on which PF_PACKET implementation (TPACKET version) is used.
From https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt -
It is said that TPACKET_V3 brings the following benefits:
 *) ~15 - 20% reduction in CPU-usage
 *) ~20% increase in packet capture rate
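For illustration, a bare-bones PACKET_RX_RING setup (TPACKET_V1, the default version) looks roughly like the sketch below; this is approximately what libpcap does internally when PACKET_MMAP is available. It needs root or CAP_NET_RAW, runs until killed, and error paths are kept minimal.

    /* rx_ring.c - minimal PACKET_MMAP receive ring */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <poll.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        struct tpacket_req req = {
            .tp_block_size = 4096,
            .tp_block_nr   = 64,
            .tp_frame_size = 2048,
            .tp_frame_nr   = 64 * (4096 / 2048),
        };
        if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0) {
            perror("PACKET_RX_RING"); return 1;
        }

        size_t map_len = (size_t)req.tp_block_size * req.tp_block_nr;
        unsigned char *ring = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) { perror("mmap"); return 1; }

        /* walk the ring: each frame slot begins with a tpacket_hdr */
        for (unsigned i = 0; ; i = (i + 1) % req.tp_frame_nr) {
            struct tpacket_hdr *h =
                (struct tpacket_hdr *)(ring + (size_t)i * req.tp_frame_size);
            while (!(h->tp_status & TP_STATUS_USER)) {
                struct pollfd pfd = { .fd = fd, .events = POLLIN };
                poll(&pfd, 1, -1);        /* block until the kernel fills a slot */
            }
            printf("captured %u bytes\n", h->tp_len);
            h->tp_status = TP_STATUS_KERNEL;   /* hand the slot back */
        }
    }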
The downside of using libpcap -
If an application needs to hold on to the packet, it may need to make a copy of the incoming packet.
Refer to the man page of pcap_next_ex:
pcap_next_ex() reads the next packet and returns a success/failure indication. If the packet was read without problems, the pointer pointed to by the pkt_header argument is set to point to the pcap_pkthdr struct for the packet, and the pointer pointed to by the pkt_data argument is set to point to the data in the packet. The struct pcap_pkthdr and the packet data are not to be freed by the caller, and are not guaranteed to be valid after the next call to pcap_next_ex(), pcap_next(), pcap_loop(), or pcap_dispatch(); if the code needs them to remain valid, it must make a copy of them.
Performance penalty if the application is interested only in incoming packets -
PF_PACKET works as a tap in the kernel, i.e. all incoming and outgoing packets are delivered to the PF_PACKET socket. This results in an expensive call to packet_rcv for every outgoing packet as well. Since libpcap uses PF_PACKET, libpcap can capture all incoming as well as outgoing packets.
If the application is interested only in incoming packets, outgoing packets can be discarded by calling pcap_setdirection on the libpcap handle. libpcap discards the outgoing packets internally by checking flags in the packet metadata.
So in essence, outgoing packets are still seen by libpcap, only to be discarded later. This is a performance penalty for an application that is interested in incoming packets only.
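Setting the direction is a one-liner on the handle; a minimal sketch (interface name assumed, error handling trimmed):

    #include <stdio.h>
    #include <pcap.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *h = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (!h) { fprintf(stderr, "%s\n", errbuf); return 1; }

        /* outgoing packets are still seen by PF_PACKET, but libpcap drops
         * them before they reach user code */
        if (pcap_setdirection(h, PCAP_D_IN) != 0)
            fprintf(stderr, "setdirection: %s\n", pcap_geterr(h));

        pcap_close(h);
        return 0;
    }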
Raw sockets work at the IP level (OSI layer 3), pcap at the data link layer (OSI layer 2). So it's less a performance issue and more a question of what you want to capture. If performance is your main concern, look at PF_RING etc.; that's what current IDSes use for capturing.
Edit: raw sockets can be either IP level (AF_INET) or data link layer (AF_PACKET), and pcap might actually use raw sockets, see Does libpcap use raw sockets underneath them?
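For what it's worth, the two flavors differ only in the arguments to socket(); a quick sketch (both calls require root/CAP_NET_RAW):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/if_ether.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    int main(void)
    {
        /* layer-3 raw socket: receives UDP/IP datagrams, no Ethernet header */
        int ip_raw = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);

        /* layer-2 raw socket: receives whole frames; this is what libpcap
         * opens under the hood on Linux */
        int l2_raw = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        printf("ip_raw=%d l2_raw=%d\n", ip_raw, l2_raw);
        if (ip_raw >= 0) close(ip_raw);
        if (l2_raw >= 0) close(l2_raw);
        return 0;
    }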

libpcap setfilter() function and packet loss

This is my first question here on Stack Overflow.
I'm writing a monitoring tool for some VoIP production servers, in particular a sniffing tool that captures all traffic (VoIP calls) matching a given pattern, using the pcap library in Perl.
I cannot use a poorly selective filter like e.g. "udp" and then do all the filtering in my app's code, because that would deliver too much traffic and the kernel wouldn't cope, reporting packet loss.
What I do instead is iteratively build the most selective filter possible during the capture. At the beginning I capture only (all) SIP signalling traffic and IP fragments (the pattern match has to be done at application level in any case); then, when I find information about RTP in the SIP packets, I add 'or' clauses with the specific IP and PORT to the current filter string and re-set the filter with setfilter().
So basically something like this:
Initial filter : "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0)" -> captures all SIP traffic and IP fragments
Updated filter : "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0) or (host IP and port PORT)" -> Captures also the RTP on specific IP,PORT
Updated filter : "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0) or (host IP and port PORT) or (host IP2 and port PORT2)" -> Captures a second RTP stream as well
And so on.
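The re-setting step itself is just a pcap_compile()/pcap_setfilter() pair on the live handle. The tool is in Perl, but in C the helper would look roughly like this (handle and filter strings supplied by the surrounding application):

    #include <stdio.h>
    #include <pcap.h>

    /* swap a new, more specific filter onto a live capture handle */
    int update_filter(pcap_t *h, const char *expr)
    {
        struct bpf_program prog;
        /* PCAP_NETMASK_UNKNOWN needs libpcap >= 1.1; older versions take
         * the actual netmask here */
        if (pcap_compile(h, &prog, expr, 1, PCAP_NETMASK_UNKNOWN) != 0) {
            fprintf(stderr, "compile: %s\n", pcap_geterr(h));
            return -1;
        }
        int rc = pcap_setfilter(h, &prog);   /* packets may be dropped here */
        pcap_freecode(&prog);
        if (rc != 0)
            fprintf(stderr, "setfilter: %s\n", pcap_geterr(h));
        return rc;
    }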
This works quite well, as I'm able to get the 'real' packet loss of the RTP streams for monitoring purposes, whereas with a less selective version of the filter the RTP packet-loss percentage wasn't reliable, because some packets were missing due to drops by the kernel.
But let's get to the drawback of this approach.
Calling setfilter() while capturing means that libpcap drops packets received "while changing the filter", as stated in the code comments for the function set_kernel_filter() in pcap-linux.c (checked in libpcap versions 0.9 and 1.1).
So what happens is that when I call setfilter() and some packets arrive IP-fragmented, I lose some fragments, and this is not reported by libpcap's statistics at the end: I spotted it by digging into traces.
Now, I understand why libpcap does this, but in my case I definitely cannot afford any packet drops (I don't mind getting some unrelated traffic).
Would you have any idea how to solve this problem without modifying libpcap's code?
What about starting up a new process with the more specific filter? You could have two parallel pcap captures going at once. After some time (or after checking that both received the same packets) you could stop the original.
Can you just capture all RTP traffic?
From capture filters, the suggested filter for RTP traffic is:
udp[1] & 1 != 1 && udp[3] & 1 != 1 && udp[8] & 0x80 == 0x80 && length < 250
(it matches even UDP source and destination ports, the high bit of the RTP version field in the first payload byte, and a length under 250 bytes). As the link points out, you will get a few false positives where DNS and possibly other UDP packets occasionally contain the header byte 0x80 used by RTP packets; however, the number should be negligible and not enough to cause kernel drops.
Round hole, square peg.
You have a tool that doesn't quite fit your need.
Another option is to do a first-level filter (as above, that captures a lot more than wanted) and pipe it into another tool that implements the finer filter you want (down to the per-call case). If that first-level filter is too much for the kernel due to heavy RTP traffic, then you may need to do something else, like keeping a stable of processes to capture individual calls (so you're not changing the filter on the "main" process; it simply instructs the others how to set their filters).
Yes, this may mean merging captures, either on the fly (pass them all to a "save the capture" process) or after the fact.
You do realize that you may well miss RTP packets anyway if you don't install your filters fast enough. Don't forget that RTP packets can come in for the originator before the 200 OK (or right together with it), and they may go back to the answerer before the ACK (or on top of it). Also don't forget INVITE with no SDP (offer in the 200 OK, answer in the ACK). Etc, etc. :-)
