How to test "UDP header incorrect length" on your own? - security

I know that an incorrect UDP header length is often part of security testing, since it could crash the target machine. But how do you test that on your own?

Testing the header length of a packet is an important part of security testing... if you are writing a TCP/IP stack. But no one is going to test this on a penetration test, because it will have little or no effect on a real-world system.
Building strange packets is useful for testing firewalls, and hping is very useful for that (as well as nmap :). Here is a good tutorial on using hping. The following command sends the largest UDP packet possible; if you try to encode a larger size, you'll just overflow the 16-bit length field (which isn't very useful).
hping -2 -p 7 192.168.10.33 -d 65535 -E /root/signature.sig
If you want to verify that a malformed packet is built correctly you should grab Wireshark.
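If you would rather craft the malformed datagram yourself instead of relying on hping, here is a minimal sketch (my own illustration, not from any particular tool) using a Linux raw socket: it writes a deliberately wrong value into the UDP length field. The addresses and port are placeholders, and it needs root to run.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);   /* IP_HDRINCL is implied */
    if (s < 0) { perror("socket"); return 1; }

    unsigned char pkt[sizeof(struct iphdr) + sizeof(struct udphdr) + 4] = {0};
    struct iphdr  *ip  = (struct iphdr *)pkt;
    struct udphdr *udp = (struct udphdr *)(pkt + sizeof(struct iphdr));
    memcpy(pkt + sizeof(struct iphdr) + sizeof(struct udphdr), "test", 4);

    ip->version  = 4;
    ip->ihl      = 5;
    ip->ttl      = 64;
    ip->protocol = IPPROTO_UDP;
    ip->saddr    = inet_addr("192.168.10.1");    /* placeholder source */
    ip->daddr    = inet_addr("192.168.10.33");   /* placeholder target */
    /* the kernel fills in the IP id, total length and header checksum */

    udp->source = htons(12345);
    udp->dest   = htons(7);
    udp->len    = htons(65535);   /* deliberately wrong: the datagram is really 12 bytes */
    udp->check  = 0;              /* checksum is optional for IPv4 UDP */

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_addr.s_addr = ip->daddr;

    if (sendto(s, pkt, sizeof(pkt), 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");
    close(s);
    return 0;
}

As with the hping command above, check the result in Wireshark to confirm the length field really went out wrong.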

Related

ICMP timestamps added to ping echo requests in Linux. How are they represented as bytes? Or how do you convert UNIX Epoch time to that timestamp format?

I encountered this issue while writing a ping program in C myself and used Wireshark to dig further into the problem: on Linux, ping, which sends echo requests to a destination IP, also adds an 8-byte timestamp field (a TOD timestamp) after the ICMP header. Ping on Windows doesn't add that timestamp; I think it does the time calculations locally instead. My question is: how do you convert the time from Unix Epoch format (the number of seconds since 1970, which you get with the time function in C) to that 8-byte TOD format? I got to this question because, after quite a bit of research, my ping.c program sends the ICMP echo request to the destination; in a test with two hosts I noticed that it arrives, but no echo reply comes back, while the native Linux ping works properly. I can only imagine two possible causes:
1. I didn't fill in the fields of the ICMP and IP headers correctly. To be honest, I doubt this possibility, because Wireshark shows the message arriving at the destination and being recognized as an echo request, yet it doesn't trigger any echo reply. If it is this, the only thing I can think of is the timestamp, which I don't know how to convert to TOD form so that it occupies at most 8 bytes.
2. There might be a firewall at the destination, or some other system-dependent factor.
https://www.ibm.com/docs/en/zos/2.2.0?topic=problems-using-ping-command
The Ping command does not use the ICMP/ICMPv6 header sequence number field (icmp_seq or icmp6_seq) to correlate requests with ICMP/ICMPv6 Echo Replies. Instead, it uses the ICMP/ICMPv6 header identifier field (icmp_id or icmp6_id) plus an 8-byte TOD time stamp field to correlate requests with replies. The TOD time stamp is the first 8-bytes of data after the ICMP/ICMPv6 header.
Finally, to repeat the initial question:
How do you convert the UNIX Epoch time to the TOD timestamp form which Linux ping adds at the end of the ICMP header/beginning of the data field?
A useful, though in my opinion not sufficient, explanation I found here:
https://towardsdatascience.com/3-tips-to-handle-timestamps-in-c-ad5b36892294
I should probably mention I'm working on Ubuntu 20.04 (Focal Fossa).
I found a related post here. The book "Principles of Operation" is mentioned in the comments. I skimmed through it but it seems to be generally lower level than C so if anyone knows another place/way to answer the question it would be better.
Background
Including the UNIX timestamp of the time of transmission in the first data bytes of the ICMP Echo message is a trick/optimization the original ping by Mike Muuss used to avoid keeping track of it locally. It exploits the following guarantee made by RFC 792's Echo or Echo Reply Message description:
The data received in the echo message must be returned in the echo reply message.
Many (if not all) BSD ping implementations are based on Mike Muuss' original implementation and therefore kept this behavior. On Linux systems, ping is typically provided by iputils, GNU inetutils, or Busybox. All exhibit the same behavior. fping is a notable exception, which stores a mapping from target host and sequence number to timestamp locally.
Implementations typically store the timestamp in the sender's native precision and byte order, as opposed to a well-defined precision in conventional network byte order (big endian) as is normally used for data transmitted over the network, because it is intended to be interpreted only by the original sender; everyone else should just treat it as an opaque stream of bytes.
Because this is so common, however, the Wireshark ICMP dissector (as of v3.6.2) tries to be clever and heuristically decodes it nonetheless, if the first 8 data bytes look like a struct timeval timestamp with 32-bit precision for seconds and microseconds, in either byte order. Please note that if the sender was actually using big-endian 64-bit precision this will fail, and if it was using little-endian 64-bit precision, it will truncate the microseconds before the Epochalypse and fail after that.
Obtaining and serializing epoch time
To answer your actual question:
How do you convert the UNIX Epoch time to the TOD timestamp form which Linux ping adds at the end of the ICMP header/beginning of the data field?
Most implementations use the obsolescent gettimeofday(2) instead of the newer clock_gettime(2). The following snippet is taken from iputils:
struct icmphdr *icp /* = pointer to ICMP packet */;
struct timeval tmp_tv;
gettimeofday(&tmp_tv, NULL);
memcpy(icp + 1, &tmp_tv, sizeof(tmp_tv));
Using memcpy from a temporary variable, instead of passing icp + 1 directly as the target of gettimeofday, avoids potential misalignment, effective-type, and strict-aliasing violations.
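For the reverse direction, here is a small sketch (my own illustration, not taken from iputils) of how the sender can pull that timestamp back out of an echo reply and turn it into a round-trip time; it assumes the reply carries the same struct timeval layout and byte order the sender wrote, and that buf points at the start of the received ICMP message.

#include <string.h>
#include <sys/time.h>
#include <netinet/ip_icmp.h>

double rtt_ms_from_reply(const unsigned char *buf)
{
    struct timeval sent, now;

    /* the transmit timestamp sits right after the 8-byte ICMP header */
    memcpy(&sent, buf + sizeof(struct icmphdr), sizeof(sent));

    gettimeofday(&now, NULL);
    return (now.tv_sec  - sent.tv_sec)  * 1000.0 +
           (now.tv_usec - sent.tv_usec) / 1000.0;
}

Because the reply echoes the request data verbatim, no local bookkeeping is needed; that is exactly the trick described above.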
I appreciate the clear answers. I actually managed to solve the problem and make the ping program work, and I have to say the problem was certainly not the timestamp, because, yes indeed, I was talking about the "Echo or Echo Reply Message". One way of implementing ping is by using ICMP's Echo and Echo Reply messages. The fact is that when I posted this question I was stuck, probably because I wasn't clearly separating the main aspects of the problem. So I started to examine the packet sent by the native ping on my Ubuntu 20.04 Focal Fossa (with Wireshark), hoping to get a better grasp of how to fill in the headers of the packet sent by my program (the IP and ICMP headers). This question simply arose because I noticed that in this version of Ubuntu, ping adds a timestamp (indeed of 32 bits) after the end of the ICMP header, at the start of the data/payload section. As a matter of fact, I also used Wireshark on Windows 10 and saw that no timestamp is added after the header there. So yes, it might be about different versions of the program being used.
The main point I want to emphasize is that my final version of ping has nothing to do with any timestamps. So yes, they are not a crucial aspect for ping to work.

Dropping packets with netcat using a UDP transfer?

I'm working on sending large data files between two Linux computers via a 10 Gigabit Ethernet cable and netcat with a UDP transfer, but seem to be having issues.
After running several tests, I've come to the conclusion that netcat is the issue. I've also tested the UDP transfer using UDT, Tsunami-UDP, and a Python UDT transfer, none of which had any packet loss issues.
On the server side, we've been doing:
cat "bigfile.txt" | pv | nc -u IP PORT
then on the client side, we've been doing:
nc -u -l PORT > "outputFile.txt"
A few things that we've noticed:
1. On one of the computers, regardless of whether it's the client or server, it just "hangs". That is to say, even once the transfer is complete, Linux doesn't kill the process and move to the next line in the terminal.
2. If we run pipe view on the receiving side as well, the incoming data rate is significantly lower than what the sending side thinks it's sending.
3. Running Wireshark doesn't show any packet loss.
4. Running the system performance monitor in Linux shows that the incoming data rate (for the receiving side) is the same as the outgoing data rate from the sending side. This is in contrast to what pipe view thinks (see #2).
We're not sure where the issue is with netcat, and if there is a way around it. Any help/insights would be greatly appreciated.
Also, for what it's worth, using netcat with a TCP transfer works fine. And, I do understand that UDP isn't known for reliability, and that packet loss should be expected, but it's the protocol we must use.
Thanks
It could well be that the sending instance is sending the data too fast for the receiving instance. Note that this can occur even if you see no drops on the receiving NIC (as you seem to be saying), because the loss can occur at OS level instead. Your OS could have its UDP buffers overflowing. Run this command:
watch -d "cat /proc/net/snmp | grep -w Udp"
to see whether the RcvbufErrors field is non-zero and/or growing while your file transfer is going on.
This answer (How to send only one UDP packet with netcat?) says that nc sends one packet per line. Assuming that's true, this could lead to a significantly higher number of packets than your other transfer mechanisms. Presumably, as @Smeeheey suggested, you're running out of receive buffers on the receiving end.
To cause your sending end to exit, you can add -q 1 to the command line (exit 1 second after seeing end of file).
But there's no way for the receiving-end nc to know when the transfer is complete. This is why those other mechanisms are "protocols" -- they have machinery built into them to communicate the bounds of a file. Raw UDP has no concept of end of file.
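To make that concrete, here is a minimal sketch (my own illustration, nothing to do with netcat itself) of a UDP receiver that imposes its own end-of-transfer rule: it listens on an arbitrary port (5001 here) and treats two seconds of silence as the end of the file, which is exactly the kind of convention a real transfer protocol has to layer on top of UDP.

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/time.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5001);              /* arbitrary example port */
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("socket/bind");
        return 1;
    }

    /* give up after 2 seconds without data and call the transfer done */
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[65536];
    long total = 0;
    for (;;) {
        ssize_t n = recv(s, buf, sizeof(buf), 0);
        if (n < 0)       /* timeout (or error): assume the sender is finished */
            break;
        total += n;      /* a real tool would write buf out to the file here */
    }
    printf("received %ld bytes\n", total);
    return 0;
}

The timeout is the "protocol" here; UDT, Tsunami-UDP and friends do something far more robust, but the point is that something has to signal the end, because UDP itself never will.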
Tuning the Linux networking stack is a bit complicated, as there are many components to tune to figure out where data is being dropped.
If possible/feasible, I'd recommend that you start by monitoring packet drops throughout the entire network stack. Once you've done that, you can determine where exactly packets are being dropped and then adjust tuning parameters as needed. There are a lot of different files to measure with lots of different fields. I wrote a detailed blog post about monitoring and tuning each component of the Linux networking stack from top to bottom. It's a bit difficult to summarize all the information there, but take a look, I think it can help guide you.

Finding out the number of dropped packets in raw sockets

I am developing a program that sniffs network packets using a raw socket (AF_PACKET, SOCK_RAW) and processes them in some way.
I am not sure whether my program runs fast enough to capture all packets arriving on the socket. I am worried that the receive buffer for this socket occasionally fills up (due to traffic bursts) and some packets are dropped.
How do I know if packets were dropped due to lack of space in the
socket's receive buffer?
I have tried running ss -f link -nlp.
This outputs the number of bytes currently stored in the receive buffer for that socket, but I cannot tell whether any packets were dropped.
I am using Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-52-generic x86_64).
Thanks.
I was having a similar problem. I knew that tcpdump was able to generate statistics about packet drops, so I tried to figure out how it did that. Looking at the code of tcpdump, I noticed that it does not generate those statistics itself; it uses the libpcap library to get them. libpcap, in turn, gets those statistics using the definitions from the if_packet.h header and reading the PACKET_STATISTICS socket option (at least I think so, but I'm no C expert).
Therefore, I saw only two solutions to the problem:
1. Interact somehow with the Linux header files from my Python script to get the packet statistics, which seemed a bit complicated.
2. Use the Python binding of libpcap, pypcap, to get that information.
Since I had no clue how to do the first, I implemented the second option. Here is an example of how to get packet statistics using pypcap and how to parse the packet data using dpkt:
import pcap
import dpkt
import socket

pc = pcap.pcap(name="eth0", timeout_ms=10000, immediate=True)

def packet_handler(ts, pkt):
    # print packet statistics: (packets received, packets dropped,
    # packets dropped by the interface)
    print pc.stats()

    # example packet parsing using dpkt
    eth = dpkt.ethernet.Ethernet(pkt)
    if eth.type != dpkt.ethernet.ETH_TYPE_IP:
        return
    ip = eth.data
    layer4 = ip.data
    ipsrc = socket.inet_ntoa(ip.src)
    ipdst = socket.inet_ntoa(ip.dst)

pc.loop(0, packet_handler)
The tpacket_stats structure is defined in the linux/if_packet.h header file. Declare a variable of that type and pass it to getsockopt() with the SOL_PACKET level and the PACKET_STATISTICS option; it returns the counts of packets received and packets dropped.
-- sometimes the drops are simply due to the buffer size
-- so if you want to reduce the drop count, try increasing the receive buffer size with setsockopt()
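As a rough sketch of what that looks like in C (fd is assumed to be an already-created AF_PACKET socket; the function names are just illustrative):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_packet.h>

/* Print the kernel's per-socket capture counters. Note that reading
 * PACKET_STATISTICS also resets them, so these are counts since the
 * previous call. */
void print_packet_stats(int fd)
{
    struct tpacket_stats st;
    socklen_t len = sizeof(st);

    if (getsockopt(fd, SOL_PACKET, PACKET_STATISTICS, &st, &len) == 0)
        printf("received: %u, dropped: %u\n", st.tp_packets, st.tp_drops);
    else
        perror("getsockopt(PACKET_STATISTICS)");
}

/* If drops show up, a larger socket receive buffer may help. */
void grow_rcvbuf(int fd, int bytes)
{
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}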
First off, switch your operating system.
You need a reliable, network-oriented operating system, not some pink fluffy "ease of use" distribution with "security" functionality enabled: NetBSD, or Gentoo/Arch Linux (the bare installations, not the GUI-kitted ones).
Start a simultaneous tcpdump on a network tap, capture the traffic you're supposed to receive alongside your program, and compare the results.
There's no efficient way to check if you've received all the packets you intended to on the receiving end since the packets might be dropped on a lower level than you anticipate.
Also, this reads more like a question for a Unix site than for Stack Overflow; there's no programming here that I can see, at least there's no code.
The only certain way to verify packet drops is to have a much beefier sender (perhaps a farm of machines sending packets) to a single client, and to record every packet sent to your receiver. Then have the statistics analyzed and compared against your sender's to see how much you dropped.
The cheaper way is to buy a network tap, or, even more ad hoc, to enable port mirroring on your switch if possible. This lets you dump as much traffic as possible onto a second machine.
This will give you a more accurate result, because your application machine is already busy receiving and processing the traffic.
Furthermore, this is why network taps are effective: they split the communication into two channels, the receiving and sending directions of your traffic if you will. That lets you capture each direction on a separate machine (again using tcpdump, but instead of a mirrored port you get more accurate traffic mirroring).
So either use port mirroring, or buy a dedicated network tap.

TCP handshake with SOCK_RAW socket

Ok, I realize this situation is somewhat unusual, but I need to establish a TCP connection (the 3-way handshake) using only raw sockets (in C, on Linux) -- i.e. I need to construct the IP headers and TCP headers myself. I'm writing a server (so I have to respond to the incoming SYN packet first), and for whatever reason I can't seem to get it right. Yes, I realize that a SOCK_STREAM will handle this for me, but for reasons I don't want to go into, that isn't an option.
The tutorials I've found online on using raw sockets all describe how to build a SYN flooder, but this is somewhat easier than actually establishing a TCP connection, since you don't have to construct a response based on the original packet. I've gotten the SYN flooder examples working, and I can read the incoming SYN packet just fine from the raw socket, but I'm still having trouble creating a valid SYN/ACK response to an incoming SYN from the client.
So, does anyone know a good tutorial on using raw sockets that goes beyond creating a SYN flooder, or does anyone have some code that could do this (using SOCK_RAW, and not SOCK_STREAM)? I would be very grateful.
MarkR is absolutely right -- the problem is that the kernel is sending reset packets in response to the initial packet because it thinks the port is closed. The kernel is beating me to the response and the connection dies. I was already using tcpdump to monitor the connection -- I should have been more observant and noticed that there were TWO replies, one of which was a reset that was screwing things up, as well as the response my program created. D'OH!
The solution that seems to work best is to use an iptables rule, as suggested by MarkR, to block the outbound packets. However, there's an easier way to do it than using the mark option, as suggested. I just match whether the reset TCP flag is set. During the course of a normal connection this is unlikely to be needed, and it doesn't really matter to my application if I block all outbound reset packets from the port being used. This effectively blocks the kernel's unwanted response, but not my own packets. If the port my program is listening on is 9999 then the iptables rule looks like this:
iptables -t filter -I OUTPUT -p tcp --sport 9999 --tcp-flags RST RST -j DROP
You want to implement part of a TCP stack in userspace... this is ok, some other apps do this.
One problem you will come across is that the kernel will be sending out (generally negative, unhelpful) replies to incoming packets. This is going to screw up any communication you attempt to initiate.
One way to avoid this is to use an IP address and interface that the kernel's own IP stack is not using; that's fine, but you will need to deal with link-layer stuff (specifically, ARP) yourself. That would require a socket lower than IPPROTO_IP SOCK_RAW: you need a packet socket (I think).
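For reference, opening such a packet socket looks roughly like this (a sketch, not a complete solution; ETH_P_ALL means the socket sees and sends complete Ethernet frames, and it requires root):

#include <sys/socket.h>
#include <netinet/in.h>        /* htons */
#include <linux/if_ether.h>    /* ETH_P_ALL */

/* With AF_PACKET the program hands the kernel complete frames, so it also
 * has to build the Ethernet header and answer ARP itself, as noted above. */
int open_packet_socket(void)
{
    return socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
}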
It may also be possible to block the kernel's responses using an iptables rule, but I rather suspect that the rule will apply to your own packets as well somehow, unless you can manage to get them treated differently (perhaps by applying a netfilter "mark" to your own packets?).
Read the man pages
socket(7)
ip(7)
packet(7)
which explain the various options and ioctls that apply to these types of sockets.
Of course you'll need a tool like Wireshark to inspect what's going on. You will need several machines to test this, I recommend using vmware (or similar) to reduce the amount of hardware required.
Sorry I can't recommend a specific tutorial.
Good luck.
I realise that this is an old thread, but here's a tutorial that goes beyond the normal SYN flooders: http://www.enderunix.org/docs/en/rawipspoof/
Hope it might be of help to someone.
I can't help you out on any tutorials.
But I can give you some advice on the tools that you could use to assist in debugging.
First off, as bmdhacks has suggested, get yourself a copy of wireshark (or tcpdump - but wireshark is easier to use). Capture a good handshake. Make sure that you save this.
Capture one of your handshakes that fails. Wireshark has quite good packet parsing and error checking, so if there's a straightforward error it will probably tell you.
Next, get yourself a copy of tcpreplay. This should also include a tool called "tcprewrite".
tcprewrite will allow you to split your previously saved capture files into two - one for each side of the handshake.
You can then use tcpreplay to play back one side of the handshake so you have a consistent set of packets to play with.
Then you use wireshark (again) to check your responses.
I don't have a tutorial, but I recently used Wireshark to good effect to debug some raw sockets programming I was doing. If you capture the packets you're sending, wireshark will do a good job of showing you if they're malformed or not. It's useful for comparing to a normal connection too.
There are structures for IP and TCP headers declared in netinet/ip.h & netinet/tcp.h respectively. You may want to look at the other headers in this directory for extra macros & stuff that may be of use.
You send a packet with the SYN flag set and a random sequence number (x). You should receive a SYN+ACK from the other side. This packet will have an acknowledgement number (y) that indicates the next sequence number the other side is expecting to receive as well as another sequence number (z). You send back an ACK packet that has sequence number x+1 and ack number z+1 to complete the connection.
You also need to make sure you calculate appropriate TCP/IP checksums & fill out the remainder of the header for the packets you send. Also, don't forget about things like host & network byte order.
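Since the checksum is where hand-rolled TCP headers most often go wrong, here is a small sketch of the standard Internet checksum computed over the IPv4 pseudo-header plus the TCP segment, following the RFC 1071 reference approach; the struct and function names are my own, not from any particular library, and saddr, daddr and the segment contents are assumed to already be in network byte order.

#include <stdint.h>
#include <stddef.h>
#include <arpa/inet.h>

/* pseudo-header that is (conceptually) prepended to the TCP segment */
struct pseudo_hdr {
    uint32_t src;        /* source IPv4 address, network byte order */
    uint32_t dst;        /* destination IPv4 address, network byte order */
    uint8_t  zero;
    uint8_t  protocol;   /* IPPROTO_TCP */
    uint16_t tcp_len;    /* TCP header + payload length, network byte order */
};

/* one's-complement 16-bit sum, accumulated into 32 bits */
static uint32_t sum16(const void *data, size_t len, uint32_t sum)
{
    const uint16_t *p = data;
    while (len > 1) { sum += *p++; len -= 2; }
    if (len)                              /* trailing odd byte, if any */
        sum += *(const uint8_t *)p;
    return sum;
}

uint16_t tcp_checksum(uint32_t saddr, uint32_t daddr,
                      const void *segment, size_t seg_len)
{
    struct pseudo_hdr ph = { saddr, daddr, 0, IPPROTO_TCP,
                             htons((uint16_t)seg_len) };
    uint32_t sum = sum16(&ph, sizeof(ph), 0);
    sum = sum16(segment, seg_len, sum);
    while (sum >> 16)                     /* fold the carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

The checksum field inside the segment must be zero while you compute this. The same sum/fold/complement routine, without the pseudo-header, also works for the IP header checksum.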
TCP is defined in RFC 793, available here: http://www.faqs.org/rfcs/rfc793.html
Depending on what you're trying to do it may be easier to get existing software to handle the TCP handshaking for you.
One open source IP stack is lwIP (http://savannah.nongnu.org/projects/lwip/) which provides a full tcp/ip stack. It is very possible to get it running in user mode using either SOCK_RAW or pcap.
If you are using raw sockets and you send with a source MAC address different from the actual one, Linux will ignore the response packet and not send an RST.

Probability of finding TCP packets with the same payload?

I had a discussion with a developer earlier today about identifying TCP packets going out on a particular interface with the same payload. He told me that the probability of finding a TCP packet with an identical payload (even if the same data is sent out several times) is very low, due to the way TCP packets are constructed at the system level. I was aware this may be the case due to the system's MTU settings (usually 1500 bytes) etc., but what sort of probability am I really looking at? Are there any specific protocols that would make it easier to identify matching payloads?
It is the protocol running over TCP that defines the uniqueness of the payload, not TCP itself.
For example, you might naively think that HTTP requests would all be identical when asking for a server's home page, but the referrer and user agent strings make the payloads different.
Similarly, if the response is dynamically generated, it may have a date header:
Date: Fri, 12 Sep 2008 10:44:27 GMT
So that will render the response payloads different. However, subsequent payloads may be identical, if the content is static.
Keep in mind that the actual packets will be different because of differing sequence numbers, which are supposed to be incrementing and pseudorandom.
Chris is right. More specifically, two or three pieces of information in the packet header should be different:
- the sequence number, which is intended to start unpredictably and which increases with the number of bytes transmitted and received;
- the timestamp, an optional field containing two timestamps;
- the checksum, since both the payload and header are checksummed, including the changing sequence number.
EDIT: Sorry, my original idea was ridiculous.
You got me interested, so I googled a little and found this. If you wanted to write your own tool, you would probably have to inspect each payload; the easiest way would be some sort of hash or checksum to check for identical payloads. Just make sure you are checking the payload, not the whole packet.
As for the statistics I will have to defer to someone with greater knowledge on the workings of TCP.
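Here is a rough sketch in C (purely illustrative, not from any existing tool) of that idea: it skips the Ethernet, IPv4 and TCP headers of a captured frame and hashes only the payload (FNV-1a, chosen arbitrarily), so two packets with identical payloads but different headers produce the same hash.

#include <stdint.h>
#include <stddef.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>

/* Assumes an untagged IPv4/TCP Ethernet frame; real capture code should
 * validate the protocol fields before calling this. */
uint64_t tcp_payload_hash(const unsigned char *frame, size_t frame_len)
{
    const size_t eth_len = 14;                         /* Ethernet II header */
    const struct iphdr *ip = (const struct iphdr *)(frame + eth_len);
    size_t ip_hl = ip->ihl * 4;
    const struct tcphdr *tcp =
        (const struct tcphdr *)((const unsigned char *)ip + ip_hl);
    size_t tcp_hl = tcp->doff * 4;

    const unsigned char *payload = (const unsigned char *)tcp + tcp_hl;
    size_t payload_len = ntohs(ip->tot_len) - ip_hl - tcp_hl;

    uint64_t h = 1469598103934665603ULL;               /* FNV-1a, 64-bit */
    for (size_t i = 0; i < payload_len && payload + i < frame + frame_len; i++) {
        h ^= payload[i];
        h *= 1099511628211ULL;
    }
    return h;
}

Store the hashes (for example in a hash table keyed by the value); any repeat indicates a payload you have already seen.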
Sending the same PAYLOAD is probably fairly common (particularly if you're running some sort of network service). If you mean sending out the same TCP segment (header and all) or the same whole network packet (IP and up), then the probability is substantially reduced.
