Is there a way to limit the number of captured packets with tcpflow?

I want to limit the number of captured packets, such as
tcpdump -c 20
but using tcpflow instead. Is this possible? The console throws a syntax error when -c is used with tcpflow.
Any help is appreciated.

I have not used tcpflow myself, but I understand it is more "flow-oriented" than packet-oriented.
From its man page, it seems you cannot limit the capture by number of packets (there is no option for that), but you can limit the number of bytes per flow with -b. If you are looking at a limited number of flows, you may be able to get somewhat similar results by tuning that.
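If you really need a hard packet cap, one workaround (a rough, untested sketch; the interface name and filter are placeholders) is to let tcpdump do the counting and then hand the capture file to tcpflow, which can read pcap files with -r:
# capture exactly 20 TCP packets, then reconstruct the flows offline
tcpdump -i eth0 -c 20 -w twenty.pcap 'tcp'
tcpflow -r twenty.pcap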

Related

Linux. Can packets pass libpcap by?

I am writing a Linux program that monitors internet traffic, that is, how many bytes I have transferred over some period of time. I use Pcap4J for Java (a libpcap wrapper) and I have a question about it. What happens if my program hasn't finished processing one packet when a new one arrives?
1. Does it slow down the download/upload rate for the whole OS?
2. Does it skip the new one, so that my program will never know it passed by?
In other words, say I've downloaded 1 GB of data on my computer. How many of those bytes does my program see: 100%, or can packets pass my program by and still reach their destination?
And let me know if it is a bad idea to write a traffic-monitoring app using this library!
Your application loses packets. In your words, they pass by.
However, if your idea is to have a metric of how many packets went in and out of your system in a given time, there are definitely better ways to achieve it.
On Linux you can just use a script that does something like this:
DEVICE=eth0
# interface byte counters live under /sys/class/net/<device>/statistics
RX0=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
TX0=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)
while : ; do
    sleep 5
    RX1=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
    TX1=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)
    # bytes transferred since the previous reading
    echo "RX bytes: $((RX1 - RX0))"
    echo "TX bytes: $((TX1 - TX0))"
    # carry the current readings into the next iteration
    RX0=$RX1
    TX0=$TX1
done
You can adjust the interval, or make it a parameter; I think you'll get the idea.

Tools to measure TCP connection latency

I want to measure the time it takes to complete the TCP three-way handshake. I want to measure this on my Linux server. What are the best practices for this? Note that I want to measure this latency on the server side, for all connections that are being accepted.
Sorry, you're right, I misunderstood the question.
I think you could achieve this using tcpdump, which is a very complete tool for inspecting TCP traffic.
From your comment I see you want to measure the time between the SYN and the final ACK packet.
With tcpdump you can filter for the relevant connections and the specific packets:
tcpdump -i <interface> "tcp[tcpflags] & (tcp-syn|tcp-ack) != 0"
By default the timestamp is displayed in the first column of tcpdump's output.
Check this, I think it could help.
I don't know if it's the best practice. Also, if you want to manipulate that data, you can pipe the output into awk or something similar.
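For instance, here's a rough sketch along those lines (assuming GNU awk, tcpdump's default output format, and a service listening on port 80; all of those are placeholders to adjust). It records the timestamp of each client's SYN and prints the delay until the first bare ACK from the same source, which for a new connection is the final packet of the handshake:
sudo tcpdump -i eth0 -nn -tt "tcp dst port 80 and tcp[tcpflags] & (tcp-syn|tcp-ack) != 0" |
awk '/Flags \[S\],/  { syn[$3] = $1 }                # SYN from a client: remember when
     /Flags \[\.\],/ { if ($3 in syn) {              # first pure ACK from that client
                           printf "%s %.3f ms\n", $3, ($1 - syn[$3]) * 1000
                           delete syn[$3]
                       } }'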
EDIT: While searching on Google I also found this resource, which is really interesting.

Dropping packets with netcat using a UDP transfer?

I'm working on sending large data files between two Linux computers via a 10 Gigabit Ethernet cable and netcat with a UDP transfer, but seem to be having issues.
After running several tests, I've come to the conclusion that netcat is the issue. I've tested the UDP transfer using UDT, Tsunami-UDP, and a Python UDT transfer as well, and none of them had any packet-loss issues.
On the server side, we've been doing:
cat "bigfile.txt" | pv | nc -u IP PORT
then on the client side, we've been doing:
nc -u -l PORT > "outputFile.txt"
A few things that we've noticed:
1. On one of the computers, regardless of whether it's the client or the server, it just "hangs". That is to say, even once the transfer is complete, Linux doesn't kill the process and return to the prompt.
2. If we run pipe view on the receiving side as well, the incoming data rate is significantly lower than what the sending side thinks it's sending.
3. Running Wireshark doesn't show any packet loss.
4. Running the system performance monitor in Linux shows that the incoming data rate on the receiving side is the same as the outgoing data rate from the sending side. This is in contrast to what pipe view reports (see #2).
We're not sure where the issue is with netcat, and if there is a way around it. Any help/insights would be greatly appreciated.
Also, for what it's worth, using netcat with a TCP transfer works fine. And, I do understand that UDP isn't known for reliability, and that packet loss should be expected, but it's the protocol we must use.
Thanks
It could well be that the sending instance is sending the data too fast for the receiving instance. Note that this can occur even if you see no drops on the receiving NIC (as you seem to be saying), because the loss can occur at the OS level instead: the kernel's UDP receive buffers may be overflowing. Run the following command and check whether the RcvbufErrors field is non-zero and/or growing while your file transfer is going on:
watch -d "cat /proc/net/snmp | grep -w Udp"
This answer (How to send only one UDP packet with netcat?) says that nc sends one packet per line. Assuming that's true, this could lead to a significantly higher packet count than your other transfer mechanisms produce. Presumably, as @Smeeheey suggested, you're running out of receive buffers on the receiving end.
To cause your sending end to exit, you can add -q 1 to its command line (exit 1 second after seeing end of file).
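For example, using the sending command from the question (a sketch; -q requires a netcat build that supports it):
cat "bigfile.txt" | pv | nc -q 1 -u IP PORT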
But there's no way for the receiving-end nc to know when the transfer is complete. This is why those other mechanisms are "protocols": they have machinery built in to communicate the bounds of a file. Raw UDP has no concept of end-of-file.
Tuning the Linux networking stack is a bit complicated, as there are many components involved and you have to figure out where data is being dropped.
If possible/feasible, I'd recommend that you start by monitoring packet drops throughout the entire network stack. Once you've done that, you can determine exactly where packets are being dropped and then adjust tuning parameters as needed. There are a lot of different files to measure, with lots of different fields. I wrote a detailed blog post about monitoring and tuning each component of the Linux networking stack from top to bottom. It's difficult to summarize all the information there, but take a look; I think it can help guide you.
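To give a flavor of the kind of knobs involved (illustrative values only; the right numbers depend on your workload, and raising buffers treats the symptom rather than the cause):
# inspect the current ceiling and default size for socket receive buffers
sysctl net.core.rmem_max net.core.rmem_default
# raise them (example values) so a fast sender is less likely to overflow the receiver
sudo sysctl -w net.core.rmem_max=26214400
sudo sysctl -w net.core.rmem_default=26214400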

Getting UDP traffic statistic between particular hosts on Linux

I need to gather some network statistics to test my server application. I've tried many Linux tools, but nothing I've found suits my needs.
Basically I want to gather some UDP statistics (bytes/time_interval, packets/time_interval, packet_loss), but regarding only two particular hosts: for example, UDP statistics for traffic going from IP_A:PORT_A to IP_B:PORT_B.
Tools like tcpdump/wireshark can easily dump such traffic, but I have trouble getting statistics like instantaneous speed (to see throughput peaks), and the Linux system statistics give me numbers for all traffic.
Text output would be preferable, so that the results can be parsed.
Does anyone have any idea how I can achieve this?
Thanks in advance
Harnen
Here's a tutorial for the libpcap library:
http://www.systhread.net/texts/200805lpcap1.php
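If a command-line approximation is enough before you write code, something like this can produce the per-interval numbers (a rough sketch: it parses tcpdump's default -q output, whose field layout can vary between versions, and eth0/IP_A/PORT_A/IP_B/PORT_B are placeholders):
sudo tcpdump -i eth0 -nn -tt -q "udp and src host IP_A and src port PORT_A and dst host IP_B and dst port PORT_B" |
awk '{ sec = int($1); pkts[sec]++; bytes[sec] += $NF }   # bucket packets by whole second
     END { for (s in pkts) print s, pkts[s], "pkts", bytes[s], "bytes" }'
Stop the capture with Ctrl-C; awk prints the per-second totals when it sees end of input.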
To determine packets lost, your program will probably want to work on a pair of logs and make sure that every UDP message seen at the source is also found at the destination. A good method is to maintain a window of packets spanning the same amount of time as your timeout: load all the packets into the window, sort them, then search for all the packets in the desired time frame, marking them as found as you go. Once you've exhausted a minute, remove the first half of that minute from the buffer, load the next thirty seconds, and re-sort.
If you have lots of packets (millions? you should probably profile it), it may be faster to use what's called a counting Bloom filter, so you can determine very quickly whether a packet is "probably" in there.
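A much cruder, offline variant of the same matching idea for a pair of finished captures (hypothetical filenames; assumes tshark is available and that both hosts captured with the same filter) is to extract a per-packet signature on each side and diff the sorted lists:
# packets whose (IP ID, UDP checksum) signature appears at the source
# but not at the destination are candidates for having been lost
tshark -r source.pcap -T fields -e ip.id -e udp.checksum | sort > src.txt
tshark -r dest.pcap -T fields -e ip.id -e udp.checksum | sort > dst.txt
comm -23 src.txt dst.txt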
If you weren't looking for programming advice, take your question to serverfault.

How to test "UDP header incorrect length" on your own?

I know that sending a UDP packet with an incorrect header length is usually part of security testing, as it could crash the target machine. However, how do I do that on my own?
Testing the header length of a packet is an important part of security testing... if you are writing a TCP/IP stack. But no one is going to test this on a penetration test, because it will have little or no effect on a real-world system.
Building strange packets is useful for testing firewalls, and hping is very useful for that (as is nmap :). Here is a good tutorial on using hping. The following command sends the largest UDP packet possible; if you try to encode a larger size, you'll get a one's-complement integer overflow due to bit boundaries (which isn't very useful).
hping -2 -p 7 192.168.10.33 -d 65535 -E /root/signature.sig
If you want to verify that a malformed packet is built correctly, you should grab Wireshark.
